Test Report: Docker_Linux_crio_arm64 21701

39a663ec30ddfd049b0783b78fdfbb9970ee2a8a:2025-10-06:41791

Tests failed (38/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.42
35 TestAddons/parallel/Registry 15.13
36 TestAddons/parallel/RegistryCreds 0.52
37 TestAddons/parallel/Ingress 144.02
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 5.45
41 TestAddons/parallel/CSI 47.31
42 TestAddons/parallel/Headlamp 3.42
43 TestAddons/parallel/CloudSpanner 5.27
44 TestAddons/parallel/LocalPath 8.41
45 TestAddons/parallel/NvidiaDevicePlugin 5.25
46 TestAddons/parallel/Yakd 6.26
52 TestForceSystemdFlag 513.67
53 TestForceSystemdEnv 512.93
98 TestFunctional/parallel/ServiceCmdConnect 603.5
126 TestFunctional/parallel/ServiceCmd/DeployApp 600.89
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
138 TestFunctional/parallel/ServiceCmd/Format 0.46
139 TestFunctional/parallel/ServiceCmd/URL 0.46
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.1
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.29
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.29
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.34
191 TestJSONOutput/pause/Command 1.89
197 TestJSONOutput/unpause/Command 1.55
281 TestPause/serial/Pause 6.64
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.4
303 TestStartStop/group/old-k8s-version/serial/Pause 6.78
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.23
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.81
320 TestStartStop/group/no-preload/serial/Pause 7.72
327 TestStartStop/group/embed-certs/serial/Pause 6.83
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.04
334 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.35
343 TestStartStop/group/newest-cni/serial/Pause 7.63
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.5
TestAddons/serial/Volcano (0.42s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 addons disable volcano --alsologtostderr -v=1: exit status 11 (422.169103ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1006 18:45:47.594755   11009 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:45:47.595499   11009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:45:47.595514   11009 out.go:374] Setting ErrFile to fd 2...
	I1006 18:45:47.595520   11009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:45:47.595976   11009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:45:47.596293   11009 mustload.go:65] Loading cluster: addons-442328
	I1006 18:45:47.596659   11009 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:45:47.596677   11009 addons.go:606] checking whether the cluster is paused
	I1006 18:45:47.596781   11009 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:45:47.596801   11009 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:45:47.597302   11009 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:45:47.636077   11009 ssh_runner.go:195] Run: systemctl --version
	I1006 18:45:47.636147   11009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:45:47.654197   11009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:45:47.754325   11009 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:45:47.754414   11009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:45:47.792370   11009 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:45:47.792447   11009 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:45:47.792468   11009 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:45:47.792478   11009 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:45:47.792482   11009 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:45:47.792486   11009 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:45:47.792490   11009 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:45:47.792494   11009 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:45:47.792497   11009 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:45:47.792503   11009 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:45:47.792507   11009 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:45:47.792510   11009 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:45:47.792513   11009 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:45:47.792517   11009 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:45:47.792520   11009 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:45:47.792534   11009 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:45:47.792541   11009 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:45:47.792547   11009 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:45:47.792551   11009 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:45:47.792554   11009 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:45:47.792559   11009 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:45:47.792563   11009 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:45:47.792567   11009 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:45:47.792589   11009 cri.go:89] found id: ""
	I1006 18:45:47.792648   11009 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:45:47.808490   11009 out.go:203] 
	W1006 18:45:47.811872   11009 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:45:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:45:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:45:47.811913   11009 out.go:285] * 
	* 
	W1006 18:45:47.921405   11009 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:45:47.924792   11009 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-442328 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.42s)

TestAddons/parallel/Registry (15.13s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.261136ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-k4cb6" [0b43b208-864a-4999-a910-ca608e917e81] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00330919s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-zqdsp" [b9743061-9fde-4f5c-aa4f-c77616b5dfa7] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003742254s
addons_test.go:392: (dbg) Run:  kubectl --context addons-442328 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-442328 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-442328 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.617361959s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 ip
2025/10/06 18:46:12 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 addons disable registry --alsologtostderr -v=1: exit status 11 (247.645109ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1006 18:46:12.187314   11951 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:46:12.187546   11951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:12.187559   11951 out.go:374] Setting ErrFile to fd 2...
	I1006 18:46:12.187565   11951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:12.188404   11951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:46:12.188761   11951 mustload.go:65] Loading cluster: addons-442328
	I1006 18:46:12.189204   11951 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:12.189248   11951 addons.go:606] checking whether the cluster is paused
	I1006 18:46:12.189372   11951 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:12.189409   11951 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:46:12.189867   11951 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:46:12.207112   11951 ssh_runner.go:195] Run: systemctl --version
	I1006 18:46:12.207166   11951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:46:12.227015   11951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:46:12.321954   11951 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:46:12.322042   11951 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:46:12.353287   11951 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:46:12.353313   11951 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:46:12.353319   11951 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:46:12.353323   11951 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:46:12.353326   11951 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:46:12.353330   11951 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:46:12.353333   11951 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:46:12.353336   11951 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:46:12.353339   11951 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:46:12.353347   11951 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:46:12.353354   11951 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:46:12.353357   11951 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:46:12.353360   11951 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:46:12.353363   11951 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:46:12.353367   11951 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:46:12.353374   11951 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:46:12.353378   11951 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:46:12.353383   11951 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:46:12.353386   11951 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:46:12.353389   11951 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:46:12.353398   11951 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:46:12.353401   11951 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:46:12.353404   11951 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:46:12.353408   11951 cri.go:89] found id: ""
	I1006 18:46:12.353464   11951 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:46:12.368594   11951 out.go:203] 
	W1006 18:46:12.371472   11951 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:46:12.371508   11951 out.go:285] * 
	* 
	W1006 18:46:12.375375   11951 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:46:12.378246   11951 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-442328 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.13s)

TestAddons/parallel/RegistryCreds (0.52s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.382984ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-442328
addons_test.go:332: (dbg) Run:  kubectl --context addons-442328 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (248.476114ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1006 18:46:48.295631   13075 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:46:48.295874   13075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:48.295910   13075 out.go:374] Setting ErrFile to fd 2...
	I1006 18:46:48.295932   13075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:48.296224   13075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:46:48.296535   13075 mustload.go:65] Loading cluster: addons-442328
	I1006 18:46:48.296950   13075 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:48.296993   13075 addons.go:606] checking whether the cluster is paused
	I1006 18:46:48.297123   13075 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:48.297163   13075 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:46:48.298781   13075 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:46:48.319077   13075 ssh_runner.go:195] Run: systemctl --version
	I1006 18:46:48.319139   13075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:46:48.338076   13075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:46:48.438603   13075 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:46:48.438676   13075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:46:48.468967   13075 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:46:48.468987   13075 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:46:48.468991   13075 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:46:48.468995   13075 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:46:48.469003   13075 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:46:48.469007   13075 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:46:48.469011   13075 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:46:48.469015   13075 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:46:48.469018   13075 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:46:48.469025   13075 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:46:48.469028   13075 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:46:48.469031   13075 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:46:48.469034   13075 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:46:48.469037   13075 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:46:48.469040   13075 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:46:48.469044   13075 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:46:48.469047   13075 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:46:48.469051   13075 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:46:48.469054   13075 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:46:48.469057   13075 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:46:48.469062   13075 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:46:48.469065   13075 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:46:48.469068   13075 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:46:48.469070   13075 cri.go:89] found id: ""
	I1006 18:46:48.469122   13075 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:46:48.485205   13075 out.go:203] 
	W1006 18:46:48.488386   13075 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:46:48.488416   13075 out.go:285] * 
	* 
	W1006 18:46:48.492237   13075 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:46:48.495118   13075 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-442328 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.52s)

TestAddons/parallel/Ingress (144.02s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-442328 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-442328 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-442328 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [7953cb30-0c85-429c-9566-6c34485a4b8f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [7953cb30-0c85-429c-9566-6c34485a4b8f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003707523s
I1006 18:46:33.774695    4350 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.803506773s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-442328 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-442328
helpers_test.go:243: (dbg) docker inspect addons-442328:

-- stdout --
	[
	    {
	        "Id": "8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27",
	        "Created": "2025-10-06T18:43:15.291490596Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5496,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T18:43:15.326921135Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27/hostname",
	        "HostsPath": "/var/lib/docker/containers/8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27/hosts",
	        "LogPath": "/var/lib/docker/containers/8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27/8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27-json.log",
	        "Name": "/addons-442328",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-442328:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-442328",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27",
	                "LowerDir": "/var/lib/docker/overlay2/f8a87c57fa681dec37b192b095c869341820ceba06c5aaccd958b28f6010eb9c-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f8a87c57fa681dec37b192b095c869341820ceba06c5aaccd958b28f6010eb9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f8a87c57fa681dec37b192b095c869341820ceba06c5aaccd958b28f6010eb9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f8a87c57fa681dec37b192b095c869341820ceba06c5aaccd958b28f6010eb9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-442328",
	                "Source": "/var/lib/docker/volumes/addons-442328/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-442328",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-442328",
	                "name.minikube.sigs.k8s.io": "addons-442328",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5dedfdd0082a1a6a774e09d3db845f6e7f9ebdf4cff2de96d32aab0812c516f9",
	            "SandboxKey": "/var/run/docker/netns/5dedfdd0082a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-442328": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:88:24:be:c7:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fdf859d8abb0bd92a77b6e58cb3c4758719b7831e18c6319222c118bbb6e751f",
	                    "EndpointID": "a7ec9b4e92d6f099b561a3d1ead564eb71b2e42423303c12997b6d2c18b0d31a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-442328",
	                        "8c722e206d43"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-442328 -n addons-442328
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-442328 logs -n 25: (1.794803166s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-993189                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-993189 │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │ 06 Oct 25 18:42 UTC │
	│ start   │ --download-only -p binary-mirror-895506 --alsologtostderr --binary-mirror http://127.0.0.1:38653 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-895506   │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │                     │
	│ delete  │ -p binary-mirror-895506                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-895506   │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │ 06 Oct 25 18:42 UTC │
	│ addons  │ disable dashboard -p addons-442328                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │                     │
	│ addons  │ enable dashboard -p addons-442328                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │                     │
	│ start   │ -p addons-442328 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │ 06 Oct 25 18:45 UTC │
	│ addons  │ addons-442328 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:45 UTC │                     │
	│ addons  │ addons-442328 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:45 UTC │                     │
	│ addons  │ enable headlamp -p addons-442328 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:45 UTC │                     │
	│ addons  │ addons-442328 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:46 UTC │                     │
	│ ip      │ addons-442328 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:46 UTC │ 06 Oct 25 18:46 UTC │
	│ addons  │ addons-442328 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:46 UTC │                     │
	│ addons  │ addons-442328 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:46 UTC │                     │
	│ addons  │ addons-442328 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:46 UTC │                     │
	│ ssh     │ addons-442328 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:46 UTC │                     │
	│ addons  │ addons-442328 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:46 UTC │                     │
	│ addons  │ addons-442328 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:46 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-442328                                                                                                                                                                                                                                                                                                                                                                                           │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:46 UTC │ 06 Oct 25 18:46 UTC │
	│ addons  │ addons-442328 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:46 UTC │                     │
	│ ssh     │ addons-442328 ssh cat /opt/local-path-provisioner/pvc-2601b9e6-af89-4f79-9a4d-c4aea7149f93_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:46 UTC │ 06 Oct 25 18:46 UTC │
	│ addons  │ addons-442328 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:46 UTC │                     │
	│ addons  │ addons-442328 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:47 UTC │                     │
	│ addons  │ addons-442328 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:47 UTC │                     │
	│ addons  │ addons-442328 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:47 UTC │                     │
	│ ip      │ addons-442328 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:48 UTC │ 06 Oct 25 18:48 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 18:42:49
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 18:42:49.193168    5102 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:42:49.193282    5102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:42:49.193291    5102 out.go:374] Setting ErrFile to fd 2...
	I1006 18:42:49.193296    5102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:42:49.193564    5102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:42:49.194017    5102 out.go:368] Setting JSON to false
	I1006 18:42:49.194742    5102 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1505,"bootTime":1759774665,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 18:42:49.194805    5102 start.go:140] virtualization:  
	I1006 18:42:49.198135    5102 out.go:179] * [addons-442328] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 18:42:49.201801    5102 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 18:42:49.201911    5102 notify.go:220] Checking for updates...
	I1006 18:42:49.207479    5102 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 18:42:49.210372    5102 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 18:42:49.213143    5102 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 18:42:49.215990    5102 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 18:42:49.218803    5102 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 18:42:49.221915    5102 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 18:42:49.241626    5102 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 18:42:49.241753    5102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 18:42:49.307850    5102 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-06 18:42:49.298625415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 18:42:49.307964    5102 docker.go:318] overlay module found
	I1006 18:42:49.311012    5102 out.go:179] * Using the docker driver based on user configuration
	I1006 18:42:49.313754    5102 start.go:304] selected driver: docker
	I1006 18:42:49.313773    5102 start.go:924] validating driver "docker" against <nil>
	I1006 18:42:49.313786    5102 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 18:42:49.314514    5102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 18:42:49.368529    5102 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-06 18:42:49.35974875 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 18:42:49.368700    5102 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 18:42:49.368926    5102 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 18:42:49.371826    5102 out.go:179] * Using Docker driver with root privileges
	I1006 18:42:49.374533    5102 cni.go:84] Creating CNI manager for ""
	I1006 18:42:49.374596    5102 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 18:42:49.374609    5102 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 18:42:49.374685    5102 start.go:348] cluster config:
	{Name:addons-442328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-442328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1006 18:42:49.377779    5102 out.go:179] * Starting "addons-442328" primary control-plane node in "addons-442328" cluster
	I1006 18:42:49.380637    5102 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 18:42:49.383578    5102 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 18:42:49.386470    5102 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 18:42:49.386508    5102 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 18:42:49.386523    5102 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 18:42:49.386532    5102 cache.go:58] Caching tarball of preloaded images
	I1006 18:42:49.386616    5102 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 18:42:49.386626    5102 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 18:42:49.386957    5102 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/config.json ...
	I1006 18:42:49.386987    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/config.json: {Name:mkc263948d35758166b9227c0ae8aa20bda1f9b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:42:49.403843    5102 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 18:42:49.404006    5102 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1006 18:42:49.404032    5102 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1006 18:42:49.404037    5102 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1006 18:42:49.404044    5102 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1006 18:42:49.404049    5102 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1006 18:43:07.500560    5102 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1006 18:43:07.500606    5102 cache.go:232] Successfully downloaded all kic artifacts
	I1006 18:43:07.500650    5102 start.go:360] acquireMachinesLock for addons-442328: {Name:mk9b46ab2957a6d941347e6c3488c1e2b2f2ea3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 18:43:07.500763    5102 start.go:364] duration metric: took 90.013µs to acquireMachinesLock for "addons-442328"
	I1006 18:43:07.500792    5102 start.go:93] Provisioning new machine with config: &{Name:addons-442328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-442328 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 18:43:07.500860    5102 start.go:125] createHost starting for "" (driver="docker")
	I1006 18:43:07.502741    5102 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1006 18:43:07.502957    5102 start.go:159] libmachine.API.Create for "addons-442328" (driver="docker")
	I1006 18:43:07.503002    5102 client.go:168] LocalClient.Create starting
	I1006 18:43:07.503115    5102 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem
	I1006 18:43:07.790167    5102 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem
	I1006 18:43:08.322291    5102 cli_runner.go:164] Run: docker network inspect addons-442328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 18:43:08.338122    5102 cli_runner.go:211] docker network inspect addons-442328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 18:43:08.338216    5102 network_create.go:284] running [docker network inspect addons-442328] to gather additional debugging logs...
	I1006 18:43:08.338237    5102 cli_runner.go:164] Run: docker network inspect addons-442328
	W1006 18:43:08.354780    5102 cli_runner.go:211] docker network inspect addons-442328 returned with exit code 1
	I1006 18:43:08.354812    5102 network_create.go:287] error running [docker network inspect addons-442328]: docker network inspect addons-442328: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-442328 not found
	I1006 18:43:08.354826    5102 network_create.go:289] output of [docker network inspect addons-442328]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-442328 not found
	
	** /stderr **
	I1006 18:43:08.354940    5102 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 18:43:08.371239    5102 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001946930}
	I1006 18:43:08.371280    5102 network_create.go:124] attempt to create docker network addons-442328 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 18:43:08.371335    5102 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-442328 addons-442328
	I1006 18:43:08.425026    5102 network_create.go:108] docker network addons-442328 192.168.49.0/24 created
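	(The 192.168.49.0/24 subnet above is picked by scanning for a private /24 that no host interface already occupies, then handed to "docker network create". A minimal stdlib-only sketch of that idea follows; it is not minikube's actual network.go logic, and the candidate list and step size are assumptions.)

// pick_subnet.go — sketch: find a free private /24 before creating the docker network.
package main

import (
	"fmt"
	"net"
)

// subnetInUse reports whether any local interface address falls inside cidr.
func subnetInUse(cidr string) (bool, error) {
	_, network, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false, err
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && network.Contains(ipnet.IP) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Walk candidate /24s (192.168.49.0, 192.168.58.0, ...) and stop at the first free one.
	for third := 49; third < 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		used, err := subnetInUse(cidr)
		if err != nil {
			panic(err)
		}
		if !used {
			fmt.Println("free subnet:", cidr)
			return
		}
	}
}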
	I1006 18:43:08.425062    5102 kic.go:121] calculated static IP "192.168.49.2" for the "addons-442328" container
	I1006 18:43:08.425144    5102 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 18:43:08.439785    5102 cli_runner.go:164] Run: docker volume create addons-442328 --label name.minikube.sigs.k8s.io=addons-442328 --label created_by.minikube.sigs.k8s.io=true
	I1006 18:43:08.456708    5102 oci.go:103] Successfully created a docker volume addons-442328
	I1006 18:43:08.456801    5102 cli_runner.go:164] Run: docker run --rm --name addons-442328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-442328 --entrypoint /usr/bin/test -v addons-442328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 18:43:10.820901    5102 cli_runner.go:217] Completed: docker run --rm --name addons-442328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-442328 --entrypoint /usr/bin/test -v addons-442328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.364061654s)
	I1006 18:43:10.820928    5102 oci.go:107] Successfully prepared a docker volume addons-442328
	I1006 18:43:10.820963    5102 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 18:43:10.820980    5102 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 18:43:10.821036    5102 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-442328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 18:43:15.222819    5102 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-442328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.401748315s)
	I1006 18:43:15.222849    5102 kic.go:203] duration metric: took 4.401866087s to extract preloaded images to volume ...
	W1006 18:43:15.223006    5102 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 18:43:15.223123    5102 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 18:43:15.273689    5102 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-442328 --name addons-442328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-442328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-442328 --network addons-442328 --ip 192.168.49.2 --volume addons-442328:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 18:43:15.592683    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Running}}
	I1006 18:43:15.619290    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:15.639100    5102 cli_runner.go:164] Run: docker exec addons-442328 stat /var/lib/dpkg/alternatives/iptables
	I1006 18:43:15.703188    5102 oci.go:144] the created container "addons-442328" has a running status.
	I1006 18:43:15.703214    5102 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa...
	I1006 18:43:16.761927    5102 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 18:43:16.782342    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:16.798890    5102 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 18:43:16.798915    5102 kic_runner.go:114] Args: [docker exec --privileged addons-442328 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 18:43:16.839046    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:16.857168    5102 machine.go:93] provisionDockerMachine start ...
	I1006 18:43:16.857270    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:16.873873    5102 main.go:141] libmachine: Using SSH client type: native
	I1006 18:43:16.874197    5102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1006 18:43:16.874213    5102 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 18:43:16.874791    5102 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57412->127.0.0.1:32768: read: connection reset by peer
	I1006 18:43:20.007192    5102 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-442328
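	(The "native" SSH client above dials the container's published port 22 — host port 32768 — with the generated id_rsa key; the first dial fails with "connection reset by peer" while sshd is still starting, so the command is retried until the "hostname" run at 18:43:20 succeeds. A rough sketch of that pattern with golang.org/x/crypto/ssh; the key path and retry policy here are assumptions, not minikube's exact values.)

// run_over_ssh.go — sketch: run a command over SSH with a short dial-retry loop.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, not a production host
		Timeout:         10 * time.Second,
	}
	var client *ssh.Client
	// Retry the dial: sshd inside the freshly started container may not be up yet.
	for attempt := 0; attempt < 5; attempt++ {
		client, err = ssh.Dial("tcp", addr, cfg)
		if err == nil {
			break
		}
		time.Sleep(2 * time.Second)
	}
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Assumed key location for illustration; the log uses the profile's machines/addons-442328/id_rsa.
	out, err := runOverSSH("127.0.0.1:32768", os.ExpandEnv("$HOME/.minikube/machines/addons-442328/id_rsa"), "hostname")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(out)
}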
	
	I1006 18:43:20.007215    5102 ubuntu.go:182] provisioning hostname "addons-442328"
	I1006 18:43:20.007288    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:20.035645    5102 main.go:141] libmachine: Using SSH client type: native
	I1006 18:43:20.035993    5102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1006 18:43:20.036016    5102 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-442328 && echo "addons-442328" | sudo tee /etc/hostname
	I1006 18:43:20.176974    5102 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-442328
	
	I1006 18:43:20.177054    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:20.195363    5102 main.go:141] libmachine: Using SSH client type: native
	I1006 18:43:20.195682    5102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1006 18:43:20.195733    5102 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-442328' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-442328/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-442328' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 18:43:20.327949    5102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 18:43:20.327978    5102 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 18:43:20.328000    5102 ubuntu.go:190] setting up certificates
	I1006 18:43:20.328009    5102 provision.go:84] configureAuth start
	I1006 18:43:20.328081    5102 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-442328
	I1006 18:43:20.345715    5102 provision.go:143] copyHostCerts
	I1006 18:43:20.345800    5102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 18:43:20.345922    5102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 18:43:20.346021    5102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 18:43:20.346071    5102 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.addons-442328 san=[127.0.0.1 192.168.49.2 addons-442328 localhost minikube]
	I1006 18:43:20.621280    5102 provision.go:177] copyRemoteCerts
	I1006 18:43:20.621347    5102 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 18:43:20.621387    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:20.638102    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:20.731529    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1006 18:43:20.748729    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 18:43:20.765667    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 18:43:20.783278    5102 provision.go:87] duration metric: took 455.244116ms to configureAuth
	I1006 18:43:20.783302    5102 ubuntu.go:206] setting minikube options for container-runtime
	I1006 18:43:20.783495    5102 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:43:20.783595    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:20.801662    5102 main.go:141] libmachine: Using SSH client type: native
	I1006 18:43:20.801962    5102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1006 18:43:20.801982    5102 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 18:43:21.042931    5102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 18:43:21.042955    5102 machine.go:96] duration metric: took 4.185759304s to provisionDockerMachine
	I1006 18:43:21.042965    5102 client.go:171] duration metric: took 13.539951773s to LocalClient.Create
	I1006 18:43:21.042979    5102 start.go:167] duration metric: took 13.540022667s to libmachine.API.Create "addons-442328"
	I1006 18:43:21.042985    5102 start.go:293] postStartSetup for "addons-442328" (driver="docker")
	I1006 18:43:21.042995    5102 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 18:43:21.043063    5102 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 18:43:21.043108    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:21.061975    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:21.160682    5102 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 18:43:21.164093    5102 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 18:43:21.164122    5102 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 18:43:21.164133    5102 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 18:43:21.164243    5102 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 18:43:21.164272    5102 start.go:296] duration metric: took 121.280542ms for postStartSetup
	I1006 18:43:21.164599    5102 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-442328
	I1006 18:43:21.183981    5102 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/config.json ...
	I1006 18:43:21.184289    5102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 18:43:21.184339    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:21.201326    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:21.292655    5102 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 18:43:21.297268    5102 start.go:128] duration metric: took 13.796393502s to createHost
	I1006 18:43:21.297295    5102 start.go:83] releasing machines lock for "addons-442328", held for 13.796517805s
	I1006 18:43:21.297371    5102 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-442328
	I1006 18:43:21.318256    5102 ssh_runner.go:195] Run: cat /version.json
	I1006 18:43:21.318323    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:21.318600    5102 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 18:43:21.318667    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:21.344740    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:21.350698    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:21.529460    5102 ssh_runner.go:195] Run: systemctl --version
	I1006 18:43:21.535758    5102 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 18:43:21.570497    5102 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 18:43:21.574753    5102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 18:43:21.574821    5102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 18:43:21.603028    5102 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 18:43:21.603109    5102 start.go:495] detecting cgroup driver to use...
	I1006 18:43:21.603153    5102 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 18:43:21.603230    5102 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 18:43:21.620119    5102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 18:43:21.633155    5102 docker.go:218] disabling cri-docker service (if available) ...
	I1006 18:43:21.633216    5102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 18:43:21.650420    5102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 18:43:21.668847    5102 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 18:43:21.780099    5102 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 18:43:21.892978    5102 docker.go:234] disabling docker service ...
	I1006 18:43:21.893044    5102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 18:43:21.913413    5102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 18:43:21.926298    5102 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 18:43:22.035873    5102 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 18:43:22.168121    5102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 18:43:22.186851    5102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 18:43:22.203194    5102 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 18:43:22.203311    5102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:43:22.212806    5102 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 18:43:22.212931    5102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:43:22.222253    5102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:43:22.231100    5102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:43:22.240003    5102 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 18:43:22.248216    5102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:43:22.257667    5102 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:43:22.271036    5102 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
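	(The block above configures CRI-O entirely through in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. The same pause_image rewrite, sketched in Go against a local copy of the file — the path and file mode here are assumptions.)

// set_pause_image.go — sketch of the pause_image edit performed with sed in the log.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // assumed local copy; the real file lives under /etc/crio/crio.conf.d/
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}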
	I1006 18:43:22.279875    5102 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 18:43:22.287288    5102 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1006 18:43:22.287350    5102 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1006 18:43:22.300209    5102 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 18:43:22.308036    5102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 18:43:22.424046    5102 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 18:43:22.550314    5102 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 18:43:22.550404    5102 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 18:43:22.554446    5102 start.go:563] Will wait 60s for crictl version
	I1006 18:43:22.554506    5102 ssh_runner.go:195] Run: which crictl
	I1006 18:43:22.558096    5102 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 18:43:22.586903    5102 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 18:43:22.587037    5102 ssh_runner.go:195] Run: crio --version
	I1006 18:43:22.614871    5102 ssh_runner.go:195] Run: crio --version
	I1006 18:43:22.648561    5102 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 18:43:22.651482    5102 cli_runner.go:164] Run: docker network inspect addons-442328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 18:43:22.667826    5102 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 18:43:22.671674    5102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
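	(The two commands above ensure /etc/hosts maps host.minikube.internal to the network gateway 192.168.49.1: grep checks for an existing entry, and the bash one-liner strips any stale line before appending the current mapping. An equivalent sketch in Go, operating on a local copy rather than /etc/hosts directly.)

// ensure_hosts_entry.go — sketch of the { grep -v ...; echo ...; } > /tmp/h.$$ pattern.
package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<host>" and appends "<ip>\t<host>".
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // mirror grep -v: discard the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// "hosts" is an assumed local copy for illustration.
	if err := ensureHostsEntry("hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}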
	I1006 18:43:22.681129    5102 kubeadm.go:883] updating cluster {Name:addons-442328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-442328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 18:43:22.681249    5102 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 18:43:22.681317    5102 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 18:43:22.711566    5102 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 18:43:22.711591    5102 crio.go:433] Images already preloaded, skipping extraction
	I1006 18:43:22.711650    5102 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 18:43:22.738118    5102 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 18:43:22.738144    5102 cache_images.go:85] Images are preloaded, skipping loading
	I1006 18:43:22.738152    5102 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 18:43:22.738235    5102 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-442328 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-442328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 18:43:22.738323    5102 ssh_runner.go:195] Run: crio config
	I1006 18:43:22.801492    5102 cni.go:84] Creating CNI manager for ""
	I1006 18:43:22.801522    5102 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 18:43:22.801554    5102 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 18:43:22.801599    5102 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-442328 NodeName:addons-442328 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 18:43:22.801765    5102 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-442328"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
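	(The kubeadm/kubelet/kube-proxy config above is rendered from the cluster parameters collected earlier — node IP 192.168.49.2, cluster name addons-442328, Kubernetes v1.34.1 — and written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A compressed sketch of that kind of templating with text/template; the template text here is an abbreviation, not minikube's own.)

// kubeadm_template.go — sketch: render a kubeadm config fragment from a few parameters.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`

func main() {
	params := struct {
		NodeIP, NodeName, KubernetesVersion string
		Port                                int
	}{"192.168.49.2", "addons-442328", "v1.34.1", 8443}
	// Render to stdout; the real flow writes the result to /var/tmp/minikube/kubeadm.yaml.new.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}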
	
	I1006 18:43:22.801862    5102 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 18:43:22.809500    5102 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 18:43:22.809565    5102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 18:43:22.817199    5102 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1006 18:43:22.830052    5102 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 18:43:22.842977    5102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1006 18:43:22.855903    5102 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 18:43:22.859462    5102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 18:43:22.869332    5102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 18:43:22.982062    5102 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 18:43:22.997007    5102 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328 for IP: 192.168.49.2
	I1006 18:43:22.997026    5102 certs.go:195] generating shared ca certs ...
	I1006 18:43:22.997042    5102 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:22.997202    5102 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 18:43:23.368698    5102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt ...
	I1006 18:43:23.368731    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt: {Name:mka617cd9c96ec7552efe1c89ec4ced838347d5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:23.368949    5102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key ...
	I1006 18:43:23.368964    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key: {Name:mke1c1853952a570e1a6b7df9f26798abd52a483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:23.369052    5102 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 18:43:23.638628    5102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt ...
	I1006 18:43:23.638656    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt: {Name:mk21b0d3f2c3741323f78c6e5b90fd5edf1600c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:23.638828    5102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key ...
	I1006 18:43:23.638843    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key: {Name:mkc60242033f0c1489cf5efd77a0632df75dfd8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:23.638921    5102 certs.go:257] generating profile certs ...
	I1006 18:43:23.638977    5102 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.key
	I1006 18:43:23.638995    5102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt with IP's: []
	I1006 18:43:24.368839    5102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt ...
	I1006 18:43:24.368871    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: {Name:mk30bee72787735fc1483520ea973c848a6f59e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:24.369064    5102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.key ...
	I1006 18:43:24.369078    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.key: {Name:mk07650efea402c2b338a2dbdaa79ebf4302f8c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:24.369162    5102 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.key.6d1bc007
	I1006 18:43:24.369182    5102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.crt.6d1bc007 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1006 18:43:24.816514    5102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.crt.6d1bc007 ...
	I1006 18:43:24.816582    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.crt.6d1bc007: {Name:mke12e9a0fca58dd6de1a580e6ee3de06c1467e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:24.816760    5102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.key.6d1bc007 ...
	I1006 18:43:24.816774    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.key.6d1bc007: {Name:mk85db778aa25f6bed0146ae08bba8008aae1249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:24.816856    5102 certs.go:382] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.crt.6d1bc007 -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.crt
	I1006 18:43:24.816945    5102 certs.go:386] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.key.6d1bc007 -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.key
	I1006 18:43:24.817001    5102 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.key
	I1006 18:43:24.817016    5102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.crt with IP's: []
	I1006 18:43:25.492664    5102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.crt ...
	I1006 18:43:25.492692    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.crt: {Name:mk95252de734ce611d9878c6be63fb0c316d5a59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:25.492883    5102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.key ...
	I1006 18:43:25.492896    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.key: {Name:mk2e81d3db589c57f42c7532f610f5b21bf55a13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:25.493088    5102 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 18:43:25.493137    5102 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 18:43:25.493170    5102 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 18:43:25.493197    5102 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 18:43:25.493782    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 18:43:25.512945    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 18:43:25.530839    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 18:43:25.548015    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 18:43:25.565350    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1006 18:43:25.582383    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 18:43:25.599050    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 18:43:25.616410    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 18:43:25.633030    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 18:43:25.650428    5102 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 18:43:25.662953    5102 ssh_runner.go:195] Run: openssl version
	I1006 18:43:25.669405    5102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 18:43:25.677483    5102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 18:43:25.680946    5102 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 18:43:25.681014    5102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 18:43:25.721712    5102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 18:43:25.729644    5102 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 18:43:25.732884    5102 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 18:43:25.732971    5102 kubeadm.go:400] StartCluster: {Name:addons-442328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-442328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 18:43:25.733054    5102 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:43:25.733117    5102 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:43:25.758993    5102 cri.go:89] found id: ""
	I1006 18:43:25.759067    5102 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 18:43:25.766606    5102 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 18:43:25.774282    5102 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 18:43:25.774370    5102 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 18:43:25.781909    5102 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 18:43:25.781929    5102 kubeadm.go:157] found existing configuration files:
	
	I1006 18:43:25.781978    5102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 18:43:25.789728    5102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 18:43:25.789816    5102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 18:43:25.797046    5102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 18:43:25.804925    5102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 18:43:25.804991    5102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 18:43:25.812277    5102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 18:43:25.819865    5102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 18:43:25.819969    5102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 18:43:25.827177    5102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 18:43:25.834703    5102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 18:43:25.834788    5102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 18:43:25.842070    5102 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 18:43:25.880030    5102 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 18:43:25.880333    5102 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 18:43:25.906326    5102 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 18:43:25.906400    5102 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 18:43:25.906443    5102 kubeadm.go:318] OS: Linux
	I1006 18:43:25.906495    5102 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 18:43:25.906553    5102 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 18:43:25.906608    5102 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 18:43:25.906662    5102 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 18:43:25.906717    5102 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 18:43:25.906769    5102 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 18:43:25.906820    5102 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 18:43:25.906874    5102 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 18:43:25.906926    5102 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 18:43:25.976304    5102 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 18:43:25.976421    5102 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 18:43:25.976530    5102 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 18:43:25.988267    5102 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 18:43:25.992666    5102 out.go:252]   - Generating certificates and keys ...
	I1006 18:43:25.992767    5102 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 18:43:25.992838    5102 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 18:43:26.215189    5102 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 18:43:26.500940    5102 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 18:43:26.771480    5102 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 18:43:26.934275    5102 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 18:43:27.092096    5102 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 18:43:27.092408    5102 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-442328 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 18:43:27.985084    5102 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 18:43:27.985285    5102 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-442328 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 18:43:28.126426    5102 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 18:43:28.547748    5102 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 18:43:28.948390    5102 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 18:43:28.948791    5102 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 18:43:29.062838    5102 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 18:43:29.801180    5102 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 18:43:30.310452    5102 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 18:43:30.621542    5102 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 18:43:30.906024    5102 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 18:43:30.906611    5102 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 18:43:30.909487    5102 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 18:43:30.912808    5102 out.go:252]   - Booting up control plane ...
	I1006 18:43:30.912913    5102 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 18:43:30.912999    5102 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 18:43:30.913708    5102 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 18:43:30.928792    5102 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 18:43:30.929132    5102 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 18:43:30.937826    5102 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 18:43:30.937933    5102 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 18:43:30.937998    5102 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 18:43:31.066546    5102 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 18:43:31.066692    5102 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 18:43:33.068057    5102 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001611919s
	I1006 18:43:33.071369    5102 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 18:43:33.071468    5102 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 18:43:33.071865    5102 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 18:43:33.071957    5102 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 18:43:36.139955    5102 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.06796515s
	I1006 18:43:37.385194    5102 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.313824885s
	I1006 18:43:39.073895    5102 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002266281s
	I1006 18:43:39.093589    5102 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 18:43:39.109863    5102 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 18:43:39.127400    5102 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 18:43:39.127653    5102 kubeadm.go:318] [mark-control-plane] Marking the node addons-442328 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 18:43:39.139134    5102 kubeadm.go:318] [bootstrap-token] Using token: b915w2.j1hxlaumrltogjrr
	I1006 18:43:39.142199    5102 out.go:252]   - Configuring RBAC rules ...
	I1006 18:43:39.142342    5102 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 18:43:39.146669    5102 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 18:43:39.154447    5102 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 18:43:39.160608    5102 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 18:43:39.164753    5102 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 18:43:39.168766    5102 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 18:43:39.481302    5102 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 18:43:39.925083    5102 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1006 18:43:40.482597    5102 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1006 18:43:40.483691    5102 kubeadm.go:318] 
	I1006 18:43:40.483796    5102 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1006 18:43:40.483802    5102 kubeadm.go:318] 
	I1006 18:43:40.483879    5102 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1006 18:43:40.483884    5102 kubeadm.go:318] 
	I1006 18:43:40.483910    5102 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1006 18:43:40.483968    5102 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 18:43:40.484017    5102 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 18:43:40.484022    5102 kubeadm.go:318] 
	I1006 18:43:40.484075    5102 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1006 18:43:40.484079    5102 kubeadm.go:318] 
	I1006 18:43:40.484126    5102 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 18:43:40.484130    5102 kubeadm.go:318] 
	I1006 18:43:40.484182    5102 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1006 18:43:40.484256    5102 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 18:43:40.484323    5102 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 18:43:40.484328    5102 kubeadm.go:318] 
	I1006 18:43:40.484412    5102 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 18:43:40.484518    5102 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1006 18:43:40.484524    5102 kubeadm.go:318] 
	I1006 18:43:40.484607    5102 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token b915w2.j1hxlaumrltogjrr \
	I1006 18:43:40.484716    5102 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd \
	I1006 18:43:40.484737    5102 kubeadm.go:318] 	--control-plane 
	I1006 18:43:40.484742    5102 kubeadm.go:318] 
	I1006 18:43:40.484831    5102 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1006 18:43:40.484836    5102 kubeadm.go:318] 
	I1006 18:43:40.484917    5102 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token b915w2.j1hxlaumrltogjrr \
	I1006 18:43:40.485025    5102 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd 
	I1006 18:43:40.488571    5102 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 18:43:40.488823    5102 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 18:43:40.488936    5102 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 18:43:40.488955    5102 cni.go:84] Creating CNI manager for ""
	I1006 18:43:40.488963    5102 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 18:43:40.491941    5102 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1006 18:43:40.494826    5102 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 18:43:40.498949    5102 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1006 18:43:40.498972    5102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1006 18:43:40.512740    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 18:43:40.782762    5102 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 18:43:40.782854    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:40.782907    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-442328 minikube.k8s.io/updated_at=2025_10_06T18_43_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81 minikube.k8s.io/name=addons-442328 minikube.k8s.io/primary=true
	I1006 18:43:40.922860    5102 ops.go:34] apiserver oom_adj: -16
	I1006 18:43:40.935117    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:41.435222    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:41.936116    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:42.435186    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:42.935225    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:43.435210    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:43.935200    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:44.435820    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:44.935133    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:45.435395    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:45.589482    5102 kubeadm.go:1113] duration metric: took 4.806676019s to wait for elevateKubeSystemPrivileges
	I1006 18:43:45.589517    5102 kubeadm.go:402] duration metric: took 19.856550715s to StartCluster
	I1006 18:43:45.589534    5102 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:45.589656    5102 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 18:43:45.590081    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:45.590268    5102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 18:43:45.590301    5102 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 18:43:45.590513    5102 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:43:45.590556    5102 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1006 18:43:45.590660    5102 addons.go:69] Setting yakd=true in profile "addons-442328"
	I1006 18:43:45.590686    5102 addons.go:238] Setting addon yakd=true in "addons-442328"
	I1006 18:43:45.590712    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.591158    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.591340    5102 addons.go:69] Setting inspektor-gadget=true in profile "addons-442328"
	I1006 18:43:45.591365    5102 addons.go:238] Setting addon inspektor-gadget=true in "addons-442328"
	I1006 18:43:45.591401    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.591772    5102 addons.go:69] Setting metrics-server=true in profile "addons-442328"
	I1006 18:43:45.591790    5102 addons.go:238] Setting addon metrics-server=true in "addons-442328"
	I1006 18:43:45.591808    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.592182    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.592638    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.595080    5102 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-442328"
	I1006 18:43:45.595148    5102 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-442328"
	I1006 18:43:45.595198    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.597619    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.598738    5102 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-442328"
	I1006 18:43:45.598769    5102 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-442328"
	I1006 18:43:45.598808    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.599270    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.599735    5102 addons.go:69] Setting registry=true in profile "addons-442328"
	I1006 18:43:45.599755    5102 addons.go:238] Setting addon registry=true in "addons-442328"
	I1006 18:43:45.599785    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.600182    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.607319    5102 addons.go:69] Setting cloud-spanner=true in profile "addons-442328"
	I1006 18:43:45.607337    5102 addons.go:69] Setting registry-creds=true in profile "addons-442328"
	I1006 18:43:45.607365    5102 addons.go:238] Setting addon registry-creds=true in "addons-442328"
	I1006 18:43:45.607368    5102 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-442328"
	I1006 18:43:45.607396    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.607413    5102 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-442328"
	I1006 18:43:45.607435    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.607880    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.607993    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625577    5102 addons.go:69] Setting storage-provisioner=true in profile "addons-442328"
	I1006 18:43:45.638712    5102 addons.go:238] Setting addon storage-provisioner=true in "addons-442328"
	I1006 18:43:45.638814    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.639450    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625621    5102 addons.go:69] Setting default-storageclass=true in profile "addons-442328"
	I1006 18:43:45.644299    5102 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-442328"
	I1006 18:43:45.646354    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625628    5102 addons.go:69] Setting gcp-auth=true in profile "addons-442328"
	I1006 18:43:45.657421    5102 mustload.go:65] Loading cluster: addons-442328
	I1006 18:43:45.657643    5102 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:43:45.657929    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625639    5102 addons.go:69] Setting ingress=true in profile "addons-442328"
	I1006 18:43:45.663266    5102 addons.go:238] Setting addon ingress=true in "addons-442328"
	I1006 18:43:45.663363    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.664323    5102 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1006 18:43:45.665684    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.667295    5102 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1006 18:43:45.667325    5102 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1006 18:43:45.667383    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.625735    5102 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-442328"
	I1006 18:43:45.679872    5102 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-442328"
	I1006 18:43:45.680213    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625645    5102 addons.go:69] Setting ingress-dns=true in profile "addons-442328"
	I1006 18:43:45.680376    5102 addons.go:238] Setting addon ingress-dns=true in "addons-442328"
	I1006 18:43:45.680409    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.680794    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625761    5102 addons.go:69] Setting volcano=true in profile "addons-442328"
	I1006 18:43:45.697338    5102 addons.go:238] Setting addon volcano=true in "addons-442328"
	I1006 18:43:45.697379    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.697838    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625768    5102 addons.go:69] Setting volumesnapshots=true in profile "addons-442328"
	I1006 18:43:45.700401    5102 addons.go:238] Setting addon volumesnapshots=true in "addons-442328"
	I1006 18:43:45.700445    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.700906    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.607351    5102 addons.go:238] Setting addon cloud-spanner=true in "addons-442328"
	I1006 18:43:45.717185    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.717655    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625822    5102 out.go:179] * Verifying Kubernetes components...
	I1006 18:43:45.725706    5102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 18:43:45.776505    5102 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1006 18:43:45.784951    5102 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1006 18:43:45.785450    5102 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1006 18:43:45.785726    5102 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 18:43:45.785814    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1006 18:43:45.785906    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.830087    5102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1006 18:43:45.843329    5102 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1006 18:43:45.843353    5102 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1006 18:43:45.843421    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.847816    5102 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1006 18:43:45.848374    5102 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1006 18:43:45.848393    5102 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1006 18:43:45.848463    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.863745    5102 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1006 18:43:45.871277    5102 out.go:179]   - Using image docker.io/registry:3.0.0
	I1006 18:43:45.875875    5102 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1006 18:43:45.875902    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1006 18:43:45.875975    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.889559    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.892932    5102 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1006 18:43:45.893003    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1006 18:43:45.893090    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.905771    5102 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1006 18:43:45.910310    5102 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1006 18:43:45.910335    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1006 18:43:45.910406    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.911362    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1006 18:43:45.914956    5102 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1006 18:43:45.918432    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1006 18:43:45.919463    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1006 18:43:45.920390    5102 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-442328"
	I1006 18:43:45.920429    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.920870    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.932600    5102 addons.go:238] Setting addon default-storageclass=true in "addons-442328"
	I1006 18:43:45.932641    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.933060    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.939919    5102 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1006 18:43:45.939946    5102 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1006 18:43:45.940017    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.940818    5102 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1006 18:43:45.940837    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1006 18:43:45.940902    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.956224    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:45.963918    5102 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1006 18:43:45.964139    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	W1006 18:43:45.965331    5102 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1006 18:43:45.965554    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:45.969284    5102 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1006 18:43:45.976997    5102 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 18:43:45.977032    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1006 18:43:45.977110    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.978579    5102 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 18:43:45.981851    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1006 18:43:45.986989    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1006 18:43:45.987114    5102 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 18:43:45.987203    5102 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 18:43:45.987495    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 18:43:45.987607    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:46.029745    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1006 18:43:46.035409    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1006 18:43:46.036172    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.038711    5102 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 18:43:46.042205    5102 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 18:43:46.042234    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1006 18:43:46.042304    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:46.043797    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1006 18:43:46.055879    5102 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1006 18:43:46.055921    5102 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1006 18:43:46.055993    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:46.093911    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.100684    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.103863    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.119498    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.148062    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.173201    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.195766    5102 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1006 18:43:46.199445    5102 out.go:179]   - Using image docker.io/busybox:stable
	I1006 18:43:46.203531    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.204635    5102 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 18:43:46.204653    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1006 18:43:46.204708    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:46.209785    5102 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 18:43:46.209806    5102 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 18:43:46.209870    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:46.228787    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.234246    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.235113    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	W1006 18:43:46.236087    5102 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 18:43:46.236131    5102 retry.go:31] will retry after 148.861145ms: ssh: handshake failed: EOF
	W1006 18:43:46.236411    5102 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 18:43:46.236429    5102 retry.go:31] will retry after 261.10572ms: ssh: handshake failed: EOF
	I1006 18:43:46.276141    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.285465    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	W1006 18:43:46.286473    5102 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 18:43:46.286493    5102 retry.go:31] will retry after 156.887178ms: ssh: handshake failed: EOF
	I1006 18:43:46.358051    5102 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1006 18:43:46.502629    5102 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 18:43:46.502668    5102 retry.go:31] will retry after 375.579891ms: ssh: handshake failed: EOF
	I1006 18:43:46.659450    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1006 18:43:46.679187    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 18:43:46.724568    5102 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1006 18:43:46.724639    5102 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1006 18:43:46.726584    5102 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1006 18:43:46.726637    5102 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1006 18:43:46.733146    5102 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1006 18:43:46.733216    5102 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1006 18:43:46.761747    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1006 18:43:46.762612    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1006 18:43:46.778084    5102 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:43:46.778156    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1006 18:43:46.862817    5102 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1006 18:43:46.862890    5102 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1006 18:43:46.877049    5102 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1006 18:43:46.877119    5102 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1006 18:43:46.912718    5102 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1006 18:43:46.912787    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1006 18:43:46.944858    5102 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1006 18:43:46.944943    5102 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1006 18:43:46.956437    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 18:43:46.990310    5102 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1006 18:43:46.990385    5102 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1006 18:43:46.993228    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 18:43:47.041050    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:43:47.096645    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 18:43:47.105553    5102 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1006 18:43:47.105573    5102 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1006 18:43:47.145475    5102 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1006 18:43:47.145500    5102 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1006 18:43:47.150482    5102 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1006 18:43:47.150510    5102 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1006 18:43:47.156553    5102 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1006 18:43:47.156577    5102 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1006 18:43:47.239441    5102 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1006 18:43:47.239464    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1006 18:43:47.249813    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 18:43:47.280593    5102 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.450403399s)
	I1006 18:43:47.280623    5102 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
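The sed pipeline completed just above rewrites the coredns ConfigMap in kube-system so that host.minikube.internal resolves to the host gateway (192.168.49.1 for this cluster) and inserts a log directive ahead of errors. A minimal sketch of the resulting ConfigMap, keeping only the plugins relevant here and assuming an otherwise stripped-down Corefile, would look roughly like:

	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: coredns
	  namespace: kube-system
	data:
	  Corefile: |
	    .:53 {
	        log
	        errors
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        cache 30
	    }

The fallthrough directive matters: queries that do not match the injected host record fall through to the upstream forward plugin instead of being answered with NXDOMAIN.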
	I1006 18:43:47.282121    5102 node_ready.go:35] waiting up to 6m0s for node "addons-442328" to be "Ready" ...
	I1006 18:43:47.342227    5102 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1006 18:43:47.342253    5102 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1006 18:43:47.369128    5102 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1006 18:43:47.369153    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1006 18:43:47.376864    5102 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 18:43:47.376899    5102 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1006 18:43:47.447119    5102 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1006 18:43:47.447147    5102 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1006 18:43:47.523772    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 18:43:47.553042    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1006 18:43:47.594675    5102 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 18:43:47.594699    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1006 18:43:47.611603    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 18:43:47.651355    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1006 18:43:47.721574    5102 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1006 18:43:47.721599    5102 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1006 18:43:47.786283    5102 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-442328" context rescaled to 1 replicas
	I1006 18:43:47.870408    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.210864872s)
	I1006 18:43:47.906707    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 18:43:47.996799    5102 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1006 18:43:47.996866    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1006 18:43:48.222958    5102 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1006 18:43:48.223028    5102 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1006 18:43:48.259438    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.580181066s)
	I1006 18:43:48.412528    5102 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1006 18:43:48.412552    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1006 18:43:48.597772    5102 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1006 18:43:48.597797    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1006 18:43:48.612755    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.850072685s)
	I1006 18:43:48.612945    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.85113011s)
	I1006 18:43:48.771628    5102 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 18:43:48.771658    5102 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1006 18:43:49.039812    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1006 18:43:49.296273    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:43:50.172198    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.215677968s)
	I1006 18:43:50.735834    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.742529919s)
	I1006 18:43:50.736026    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.694947149s)
	W1006 18:43:50.736055    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:50.736071    5102 retry.go:31] will retry after 345.519101ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
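The validation failure above, repeated on every retry that follows, means at least one document in /etc/kubernetes/addons/ig-crd.yaml reaches kubectl without the mandatory top-level apiVersion and kind fields, so client-side validation rejects the file before anything is sent to the API server. Purely as an illustration of what kubectl expects from a CRD manifest, and not the actual Inspektor Gadget definition, a minimal CustomResourceDefinition header looks like this (group, names, and schema are placeholders):

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.gadget.example.com   # placeholder: must be <plural>.<group>
	spec:
	  group: gadget.example.com           # placeholder group
	  scope: Namespaced
	  names:
	    plural: examples
	    singular: example
	    kind: Example
	  versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object
	          x-kubernetes-preserve-unknown-fields: true

Because the same file is re-applied unchanged, the retries below hit the identical error each time; as the kubectl message itself notes, only --validate=false would bypass it.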
	I1006 18:43:50.736088    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.639419083s)
	I1006 18:43:50.736117    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.486278777s)
	W1006 18:43:50.816157    5102 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
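The 'storage-provisioner-rancher' warning above is an optimistic-concurrency conflict: the StorageClass was modified between the addon's read and its update while it tried to mark local-path as the default class. The default class is selected through a standard annotation on the StorageClass object itself; as a hypothetical sketch (provisioner name and binding mode assumed, not taken from this log), the end state being aimed for is:

	apiVersion: storage.k8s.io/v1
	kind: StorageClass
	metadata:
	  name: local-path
	  annotations:
	    storageclass.kubernetes.io/is-default-class: "true"
	provisioner: rancher.io/local-path      # assumed provisioner for the local-path addon
	volumeBindingMode: WaitForFirstConsumer # assumed; typical for local-path
	reclaimPolicy: Delete

A conflict of this kind only means the object changed underneath the writer; re-reading the latest resourceVersion and re-applying the annotation resolves it.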
	I1006 18:43:51.082412    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1006 18:43:51.350766    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:43:52.121008    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.597206395s)
	I1006 18:43:52.121109    5102 addons.go:479] Verifying addon ingress=true in "addons-442328"
	I1006 18:43:52.121323    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.568250098s)
	I1006 18:43:52.121381    5102 addons.go:479] Verifying addon registry=true in "addons-442328"
	I1006 18:43:52.121723    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.47033587s)
	I1006 18:43:52.121847    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.510209141s)
	I1006 18:43:52.122446    5102 addons.go:479] Verifying addon metrics-server=true in "addons-442328"
	I1006 18:43:52.121873    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.215091427s)
	W1006 18:43:52.122492    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1006 18:43:52.122511    5102 retry.go:31] will retry after 329.20649ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
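The failure above is an ordering problem rather than a malformed manifest: the VolumeSnapshotClass named csi-hostpath-snapclass cannot be mapped to an API resource until the volumesnapshotclasses.snapshot.storage.k8s.io CRD created by the same apply has been registered, hence the 'ensure CRDs are installed first' hint. The forced re-apply at 18:43:52 below completes without a further retry for this manifest set, which suggests the CRDs were registered by then. As an illustration only (the CSI driver name is an assumption, not taken from this log), the object being created looks like:

	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshotClass
	metadata:
	  name: csi-hostpath-snapclass
	driver: hostpath.csi.k8s.io   # assumed driver name for the csi-hostpath plugin
	deletionPolicy: Delete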
	I1006 18:43:52.124693    5102 out.go:179] * Verifying ingress addon...
	I1006 18:43:52.126730    5102 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-442328 service yakd-dashboard -n yakd-dashboard
	
	I1006 18:43:52.126739    5102 out.go:179] * Verifying registry addon...
	I1006 18:43:52.129551    5102 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1006 18:43:52.130717    5102 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1006 18:43:52.158574    5102 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1006 18:43:52.158606    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:52.160154    5102 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1006 18:43:52.160182    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:52.452417    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 18:43:52.546928    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.507072008s)
	I1006 18:43:52.547043    5102 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-442328"
	I1006 18:43:52.547011    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.464562756s)
	W1006 18:43:52.547461    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:52.547577    5102 retry.go:31] will retry after 524.476926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:52.550068    5102 out.go:179] * Verifying csi-hostpath-driver addon...
	I1006 18:43:52.554595    5102 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1006 18:43:52.573613    5102 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1006 18:43:52.573647    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:52.674205    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:52.674773    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:53.058039    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:53.073207    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:43:53.159463    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:53.159884    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:53.502844    5102 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1006 18:43:53.503034    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:53.528429    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:53.558304    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:53.633393    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:53.638555    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:53.654598    5102 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1006 18:43:53.668795    5102 addons.go:238] Setting addon gcp-auth=true in "addons-442328"
	I1006 18:43:53.668896    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:53.669403    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:53.693710    5102 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1006 18:43:53.693762    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:53.730526    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	W1006 18:43:53.785419    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:43:54.058971    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:54.132903    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:54.138765    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:54.557979    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:54.632811    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:54.638232    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:55.060157    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:55.132976    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:55.138610    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:55.157919    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.705448821s)
	I1006 18:43:55.157957    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.084724366s)
	I1006 18:43:55.158001    5102 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.46426913s)
	W1006 18:43:55.158136    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:55.158157    5102 retry.go:31] will retry after 626.362992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:55.160989    5102 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 18:43:55.163858    5102 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1006 18:43:55.166808    5102 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1006 18:43:55.166836    5102 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1006 18:43:55.180220    5102 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1006 18:43:55.180244    5102 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1006 18:43:55.194495    5102 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 18:43:55.194520    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1006 18:43:55.207926    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 18:43:55.564440    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:55.642519    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:55.646664    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:55.702588    5102 addons.go:479] Verifying addon gcp-auth=true in "addons-442328"
	I1006 18:43:55.705698    5102 out.go:179] * Verifying gcp-auth addon...
	I1006 18:43:55.709239    5102 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1006 18:43:55.712125    5102 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1006 18:43:55.712189    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:43:55.784959    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1006 18:43:55.786009    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:43:56.058587    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:56.134246    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:56.139102    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:56.213095    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:43:56.559001    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1006 18:43:56.606665    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:56.606743    5102 retry.go:31] will retry after 832.446037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:56.632606    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:56.639370    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:56.711889    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:43:57.057901    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:57.132851    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:57.138689    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:57.212349    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:43:57.439896    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:43:57.558327    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:57.634359    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:57.640073    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:57.712809    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:43:57.786506    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:43:58.059590    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:58.133444    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:58.140266    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:58.212469    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:43:58.247228    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:58.247256    5102 retry.go:31] will retry after 1.041225751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:58.558097    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:58.633090    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:58.638838    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:58.712915    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:43:59.058924    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:59.133295    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:59.138962    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:59.213036    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:43:59.288937    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:43:59.558272    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:59.634190    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:59.640463    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:59.713209    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:43:59.792724    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:00.094355    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:00.142717    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:00.155815    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:00.213289    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:00.314746    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.025767769s)
	W1006 18:44:00.314862    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:00.315819    5102 retry.go:31] will retry after 2.820328663s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:00.559421    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:00.634838    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:00.659765    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:00.713210    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:01.058564    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:01.132409    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:01.139967    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:01.212941    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:01.557689    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:01.632750    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:01.639301    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:01.713303    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:02.058446    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:02.133661    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:02.139198    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:02.212893    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:02.285837    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:02.558007    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:02.633133    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:02.638653    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:02.712404    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:03.058274    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:03.133888    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:03.136968    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:44:03.140127    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:03.218525    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:03.558228    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:03.633157    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:03.638952    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:03.712753    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:03.961811    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:03.961848    5102 retry.go:31] will retry after 1.956913032s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:04.058602    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:04.133717    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:04.139302    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:04.211962    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:04.557683    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:04.632676    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:04.639430    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:04.712287    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:04.785663    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:05.058645    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:05.132772    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:05.139335    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:05.212949    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:05.557865    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:05.632668    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:05.639615    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:05.712442    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:05.918968    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:44:06.058439    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:06.133299    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:06.138822    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:06.212814    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:06.558474    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:06.633373    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:06.639302    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:06.712867    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:06.750581    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:06.750658    5102 retry.go:31] will retry after 4.301628283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 18:44:06.785775    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:07.057945    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:07.133163    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:07.138818    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:07.212856    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:07.557501    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:07.633187    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:07.638878    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:07.712654    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:08.058232    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:08.133196    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:08.139019    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:08.213402    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:08.559030    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:08.633130    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:08.638651    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:08.712530    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:08.790906    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:09.058260    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:09.133020    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:09.139130    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:09.212312    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:09.557890    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:09.632913    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:09.638569    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:09.712651    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:10.057908    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:10.133529    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:10.139035    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:10.213002    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:10.558239    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:10.637024    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:10.641948    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:10.712960    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:11.053226    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:44:11.063237    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:11.133428    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:11.139192    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:11.212951    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:11.285957    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:11.558142    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:11.632931    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:11.639300    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:11.713036    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:11.867012    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:11.867044    5102 retry.go:31] will retry after 4.682245078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:12.058169    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:12.133432    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:12.139622    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:12.212422    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:12.558069    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:12.633198    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:12.638660    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:12.712666    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:13.057956    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:13.132802    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:13.139554    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:13.212207    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:13.557849    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:13.632936    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:13.639452    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:13.712335    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:13.785887    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:14.058297    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:14.133154    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:14.138900    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:14.213286    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:14.558202    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:14.633222    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:14.638891    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:14.712696    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:15.058866    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:15.133232    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:15.139012    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:15.213083    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:15.558045    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:15.633240    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:15.639013    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:15.712968    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:15.786414    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:16.058136    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:16.132927    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:16.138497    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:16.212271    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:16.550423    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:44:16.557821    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:16.633466    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:16.639113    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:16.713021    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:17.058320    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:17.133687    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:17.139389    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:17.212773    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:17.381288    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:17.381320    5102 retry.go:31] will retry after 9.740361518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:17.558075    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:17.633000    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:17.638617    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:17.712543    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:18.058308    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:18.133652    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:18.139571    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:18.212455    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:18.285274    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:18.558548    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:18.633653    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:18.638900    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:18.712787    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:19.058323    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:19.132860    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:19.138549    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:19.212624    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:19.557404    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:19.633411    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:19.638924    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:19.712784    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:20.057766    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:20.132730    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:20.139507    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:20.212190    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:20.285887    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:20.558176    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:20.633170    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:20.638932    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:20.712947    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:21.058363    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:21.133107    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:21.139351    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:21.213121    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:21.558229    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:21.633233    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:21.639073    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:21.712923    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:22.057768    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:22.132785    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:22.139044    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:22.213162    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:22.559301    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:22.633672    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:22.639344    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:22.712100    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:22.785851    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:23.057949    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:23.132715    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:23.139372    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:23.213314    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:23.558378    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:23.633655    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:23.639150    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:23.713113    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:24.058411    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:24.133683    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:24.139250    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:24.211996    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:24.558047    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:24.633185    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:24.638866    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:24.712841    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:24.788842    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:25.058082    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:25.133005    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:25.138846    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:25.213404    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:25.558023    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:25.633571    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:25.639028    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:25.712907    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:26.058710    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:26.132637    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:26.139313    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:26.212345    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:26.557840    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:26.632848    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:26.639370    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:26.750156    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:26.805706    5102 node_ready.go:49] node "addons-442328" is "Ready"
	I1006 18:44:26.805735    5102 node_ready.go:38] duration metric: took 39.52358282s for node "addons-442328" to be "Ready" ...
	I1006 18:44:26.805749    5102 api_server.go:52] waiting for apiserver process to appear ...
	I1006 18:44:26.805825    5102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 18:44:26.835913    5102 api_server.go:72] duration metric: took 41.245582728s to wait for apiserver process to appear ...
	I1006 18:44:26.835939    5102 api_server.go:88] waiting for apiserver healthz status ...
	I1006 18:44:26.835958    5102 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1006 18:44:26.854686    5102 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1006 18:44:26.856831    5102 api_server.go:141] control plane version: v1.34.1
	I1006 18:44:26.856862    5102 api_server.go:131] duration metric: took 20.915555ms to wait for apiserver health ...
	I1006 18:44:26.856871    5102 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 18:44:26.870158    5102 system_pods.go:59] 19 kube-system pods found
	I1006 18:44:26.870201    5102 system_pods.go:61] "coredns-66bc5c9577-bx5cf" [779ed2d3-fc88-4853-ac93-7a56e62a0190] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 18:44:26.870209    5102 system_pods.go:61] "csi-hostpath-attacher-0" [d762fc6a-25ef-424f-b865-914e712ed260] Pending
	I1006 18:44:26.870216    5102 system_pods.go:61] "csi-hostpath-resizer-0" [f5c6a57b-496a-40d3-a7da-cae95047a0db] Pending
	I1006 18:44:26.870222    5102 system_pods.go:61] "csi-hostpathplugin-g7kvd" [db381122-217d-4859-9d86-d7d29a692af5] Pending
	I1006 18:44:26.870226    5102 system_pods.go:61] "etcd-addons-442328" [8562fac7-6c63-42c3-98bb-4c599ba17505] Running
	I1006 18:44:26.870230    5102 system_pods.go:61] "kindnet-g2tkh" [1e0c713a-0110-4300-9fe7-8834abac0a34] Running
	I1006 18:44:26.870235    5102 system_pods.go:61] "kube-apiserver-addons-442328" [5b6af8a1-10f7-478e-b97d-27e19e5f2469] Running
	I1006 18:44:26.870239    5102 system_pods.go:61] "kube-controller-manager-addons-442328" [0d008802-4fa7-4ef6-81d3-f391d8b8488f] Running
	I1006 18:44:26.870244    5102 system_pods.go:61] "kube-ingress-dns-minikube" [737e5bad-1cf7-448f-acf2-93e38e152f16] Pending
	I1006 18:44:26.870251    5102 system_pods.go:61] "kube-proxy-n686b" [68644dfa-f7a5-42aa-938f-b928268e97f0] Running
	I1006 18:44:26.870256    5102 system_pods.go:61] "kube-scheduler-addons-442328" [fafbbc55-3ea9-4805-8bc2-0a365282ee86] Running
	I1006 18:44:26.870267    5102 system_pods.go:61] "metrics-server-85b7d694d7-swsbd" [d28f1e31-8b2f-42c5-b7a2-a1662d0c5412] Pending
	I1006 18:44:26.870271    5102 system_pods.go:61] "nvidia-device-plugin-daemonset-2ptdw" [9a959e36-17ef-49b2-8c50-36b639bbbbf7] Pending
	I1006 18:44:26.870275    5102 system_pods.go:61] "registry-66898fdd98-k4cb6" [0b43b208-864a-4999-a910-ca608e917e81] Pending
	I1006 18:44:26.870288    5102 system_pods.go:61] "registry-creds-764b6fb674-pgnhk" [7ae00ec6-941d-4df4-a11f-08481b31d714] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 18:44:26.870293    5102 system_pods.go:61] "registry-proxy-zqdsp" [b9743061-9fde-4f5c-aa4f-c77616b5dfa7] Pending
	I1006 18:44:26.870303    5102 system_pods.go:61] "snapshot-controller-7d9fbc56b8-glwdc" [1ac758b5-014d-4f6e-a68a-a653a4c90550] Pending
	I1006 18:44:26.870308    5102 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jg8hf" [b5a9bdde-a4d4-442a-9cee-f8cbc46520d2] Pending
	I1006 18:44:26.870312    5102 system_pods.go:61] "storage-provisioner" [63c4de3f-fd5a-4fb8-ba9f-4e238f7541f3] Pending
	I1006 18:44:26.870318    5102 system_pods.go:74] duration metric: took 13.441294ms to wait for pod list to return data ...
	I1006 18:44:26.870329    5102 default_sa.go:34] waiting for default service account to be created ...
	I1006 18:44:26.883957    5102 default_sa.go:45] found service account: "default"
	I1006 18:44:26.883984    5102 default_sa.go:55] duration metric: took 13.648168ms for default service account to be created ...
	I1006 18:44:26.883995    5102 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 18:44:26.908285    5102 system_pods.go:86] 19 kube-system pods found
	I1006 18:44:26.908318    5102 system_pods.go:89] "coredns-66bc5c9577-bx5cf" [779ed2d3-fc88-4853-ac93-7a56e62a0190] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 18:44:26.908326    5102 system_pods.go:89] "csi-hostpath-attacher-0" [d762fc6a-25ef-424f-b865-914e712ed260] Pending
	I1006 18:44:26.908331    5102 system_pods.go:89] "csi-hostpath-resizer-0" [f5c6a57b-496a-40d3-a7da-cae95047a0db] Pending
	I1006 18:44:26.908335    5102 system_pods.go:89] "csi-hostpathplugin-g7kvd" [db381122-217d-4859-9d86-d7d29a692af5] Pending
	I1006 18:44:26.908339    5102 system_pods.go:89] "etcd-addons-442328" [8562fac7-6c63-42c3-98bb-4c599ba17505] Running
	I1006 18:44:26.908344    5102 system_pods.go:89] "kindnet-g2tkh" [1e0c713a-0110-4300-9fe7-8834abac0a34] Running
	I1006 18:44:26.908348    5102 system_pods.go:89] "kube-apiserver-addons-442328" [5b6af8a1-10f7-478e-b97d-27e19e5f2469] Running
	I1006 18:44:26.908357    5102 system_pods.go:89] "kube-controller-manager-addons-442328" [0d008802-4fa7-4ef6-81d3-f391d8b8488f] Running
	I1006 18:44:26.908364    5102 system_pods.go:89] "kube-ingress-dns-minikube" [737e5bad-1cf7-448f-acf2-93e38e152f16] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 18:44:26.908379    5102 system_pods.go:89] "kube-proxy-n686b" [68644dfa-f7a5-42aa-938f-b928268e97f0] Running
	I1006 18:44:26.908383    5102 system_pods.go:89] "kube-scheduler-addons-442328" [fafbbc55-3ea9-4805-8bc2-0a365282ee86] Running
	I1006 18:44:26.908390    5102 system_pods.go:89] "metrics-server-85b7d694d7-swsbd" [d28f1e31-8b2f-42c5-b7a2-a1662d0c5412] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 18:44:26.908398    5102 system_pods.go:89] "nvidia-device-plugin-daemonset-2ptdw" [9a959e36-17ef-49b2-8c50-36b639bbbbf7] Pending
	I1006 18:44:26.908403    5102 system_pods.go:89] "registry-66898fdd98-k4cb6" [0b43b208-864a-4999-a910-ca608e917e81] Pending
	I1006 18:44:26.908409    5102 system_pods.go:89] "registry-creds-764b6fb674-pgnhk" [7ae00ec6-941d-4df4-a11f-08481b31d714] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 18:44:26.908425    5102 system_pods.go:89] "registry-proxy-zqdsp" [b9743061-9fde-4f5c-aa4f-c77616b5dfa7] Pending
	I1006 18:44:26.908432    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-glwdc" [1ac758b5-014d-4f6e-a68a-a653a4c90550] Pending
	I1006 18:44:26.908437    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jg8hf" [b5a9bdde-a4d4-442a-9cee-f8cbc46520d2] Pending
	I1006 18:44:26.908443    5102 system_pods.go:89] "storage-provisioner" [63c4de3f-fd5a-4fb8-ba9f-4e238f7541f3] Pending
	I1006 18:44:26.908457    5102 retry.go:31] will retry after 307.182962ms: missing components: kube-dns
	I1006 18:44:27.122670    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:44:27.122964    5102 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1006 18:44:27.122982    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:27.140681    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:27.145136    5102 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1006 18:44:27.145161    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:27.219499    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:27.225464    5102 system_pods.go:86] 19 kube-system pods found
	I1006 18:44:27.225505    5102 system_pods.go:89] "coredns-66bc5c9577-bx5cf" [779ed2d3-fc88-4853-ac93-7a56e62a0190] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 18:44:27.225518    5102 system_pods.go:89] "csi-hostpath-attacher-0" [d762fc6a-25ef-424f-b865-914e712ed260] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 18:44:27.225525    5102 system_pods.go:89] "csi-hostpath-resizer-0" [f5c6a57b-496a-40d3-a7da-cae95047a0db] Pending
	I1006 18:44:27.225539    5102 system_pods.go:89] "csi-hostpathplugin-g7kvd" [db381122-217d-4859-9d86-d7d29a692af5] Pending
	I1006 18:44:27.225543    5102 system_pods.go:89] "etcd-addons-442328" [8562fac7-6c63-42c3-98bb-4c599ba17505] Running
	I1006 18:44:27.225553    5102 system_pods.go:89] "kindnet-g2tkh" [1e0c713a-0110-4300-9fe7-8834abac0a34] Running
	I1006 18:44:27.225560    5102 system_pods.go:89] "kube-apiserver-addons-442328" [5b6af8a1-10f7-478e-b97d-27e19e5f2469] Running
	I1006 18:44:27.225565    5102 system_pods.go:89] "kube-controller-manager-addons-442328" [0d008802-4fa7-4ef6-81d3-f391d8b8488f] Running
	I1006 18:44:27.225575    5102 system_pods.go:89] "kube-ingress-dns-minikube" [737e5bad-1cf7-448f-acf2-93e38e152f16] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 18:44:27.225584    5102 system_pods.go:89] "kube-proxy-n686b" [68644dfa-f7a5-42aa-938f-b928268e97f0] Running
	I1006 18:44:27.225594    5102 system_pods.go:89] "kube-scheduler-addons-442328" [fafbbc55-3ea9-4805-8bc2-0a365282ee86] Running
	I1006 18:44:27.225601    5102 system_pods.go:89] "metrics-server-85b7d694d7-swsbd" [d28f1e31-8b2f-42c5-b7a2-a1662d0c5412] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 18:44:27.225613    5102 system_pods.go:89] "nvidia-device-plugin-daemonset-2ptdw" [9a959e36-17ef-49b2-8c50-36b639bbbbf7] Pending
	I1006 18:44:27.225624    5102 system_pods.go:89] "registry-66898fdd98-k4cb6" [0b43b208-864a-4999-a910-ca608e917e81] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 18:44:27.225634    5102 system_pods.go:89] "registry-creds-764b6fb674-pgnhk" [7ae00ec6-941d-4df4-a11f-08481b31d714] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 18:44:27.225639    5102 system_pods.go:89] "registry-proxy-zqdsp" [b9743061-9fde-4f5c-aa4f-c77616b5dfa7] Pending
	I1006 18:44:27.225650    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-glwdc" [1ac758b5-014d-4f6e-a68a-a653a4c90550] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 18:44:27.225661    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jg8hf" [b5a9bdde-a4d4-442a-9cee-f8cbc46520d2] Pending
	I1006 18:44:27.225665    5102 system_pods.go:89] "storage-provisioner" [63c4de3f-fd5a-4fb8-ba9f-4e238f7541f3] Pending
	I1006 18:44:27.225679    5102 retry.go:31] will retry after 390.029892ms: missing components: kube-dns
	I1006 18:44:27.566852    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:27.629912    5102 system_pods.go:86] 19 kube-system pods found
	I1006 18:44:27.629951    5102 system_pods.go:89] "coredns-66bc5c9577-bx5cf" [779ed2d3-fc88-4853-ac93-7a56e62a0190] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 18:44:27.629960    5102 system_pods.go:89] "csi-hostpath-attacher-0" [d762fc6a-25ef-424f-b865-914e712ed260] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 18:44:27.629967    5102 system_pods.go:89] "csi-hostpath-resizer-0" [f5c6a57b-496a-40d3-a7da-cae95047a0db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 18:44:27.629974    5102 system_pods.go:89] "csi-hostpathplugin-g7kvd" [db381122-217d-4859-9d86-d7d29a692af5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 18:44:27.629982    5102 system_pods.go:89] "etcd-addons-442328" [8562fac7-6c63-42c3-98bb-4c599ba17505] Running
	I1006 18:44:27.629998    5102 system_pods.go:89] "kindnet-g2tkh" [1e0c713a-0110-4300-9fe7-8834abac0a34] Running
	I1006 18:44:27.630003    5102 system_pods.go:89] "kube-apiserver-addons-442328" [5b6af8a1-10f7-478e-b97d-27e19e5f2469] Running
	I1006 18:44:27.630008    5102 system_pods.go:89] "kube-controller-manager-addons-442328" [0d008802-4fa7-4ef6-81d3-f391d8b8488f] Running
	I1006 18:44:27.630014    5102 system_pods.go:89] "kube-ingress-dns-minikube" [737e5bad-1cf7-448f-acf2-93e38e152f16] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 18:44:27.630018    5102 system_pods.go:89] "kube-proxy-n686b" [68644dfa-f7a5-42aa-938f-b928268e97f0] Running
	I1006 18:44:27.630029    5102 system_pods.go:89] "kube-scheduler-addons-442328" [fafbbc55-3ea9-4805-8bc2-0a365282ee86] Running
	I1006 18:44:27.630036    5102 system_pods.go:89] "metrics-server-85b7d694d7-swsbd" [d28f1e31-8b2f-42c5-b7a2-a1662d0c5412] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 18:44:27.630049    5102 system_pods.go:89] "nvidia-device-plugin-daemonset-2ptdw" [9a959e36-17ef-49b2-8c50-36b639bbbbf7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 18:44:27.630055    5102 system_pods.go:89] "registry-66898fdd98-k4cb6" [0b43b208-864a-4999-a910-ca608e917e81] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 18:44:27.630066    5102 system_pods.go:89] "registry-creds-764b6fb674-pgnhk" [7ae00ec6-941d-4df4-a11f-08481b31d714] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 18:44:27.630072    5102 system_pods.go:89] "registry-proxy-zqdsp" [b9743061-9fde-4f5c-aa4f-c77616b5dfa7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 18:44:27.630078    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-glwdc" [1ac758b5-014d-4f6e-a68a-a653a4c90550] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 18:44:27.630086    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jg8hf" [b5a9bdde-a4d4-442a-9cee-f8cbc46520d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 18:44:27.630091    5102 system_pods.go:89] "storage-provisioner" [63c4de3f-fd5a-4fb8-ba9f-4e238f7541f3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 18:44:27.630107    5102 retry.go:31] will retry after 361.124555ms: missing components: kube-dns
	I1006 18:44:27.636744    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:27.643271    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:27.743423    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:28.002465    5102 system_pods.go:86] 19 kube-system pods found
	I1006 18:44:28.002503    5102 system_pods.go:89] "coredns-66bc5c9577-bx5cf" [779ed2d3-fc88-4853-ac93-7a56e62a0190] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 18:44:28.002512    5102 system_pods.go:89] "csi-hostpath-attacher-0" [d762fc6a-25ef-424f-b865-914e712ed260] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 18:44:28.002520    5102 system_pods.go:89] "csi-hostpath-resizer-0" [f5c6a57b-496a-40d3-a7da-cae95047a0db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 18:44:28.002528    5102 system_pods.go:89] "csi-hostpathplugin-g7kvd" [db381122-217d-4859-9d86-d7d29a692af5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 18:44:28.002532    5102 system_pods.go:89] "etcd-addons-442328" [8562fac7-6c63-42c3-98bb-4c599ba17505] Running
	I1006 18:44:28.002540    5102 system_pods.go:89] "kindnet-g2tkh" [1e0c713a-0110-4300-9fe7-8834abac0a34] Running
	I1006 18:44:28.002549    5102 system_pods.go:89] "kube-apiserver-addons-442328" [5b6af8a1-10f7-478e-b97d-27e19e5f2469] Running
	I1006 18:44:28.002553    5102 system_pods.go:89] "kube-controller-manager-addons-442328" [0d008802-4fa7-4ef6-81d3-f391d8b8488f] Running
	I1006 18:44:28.002564    5102 system_pods.go:89] "kube-ingress-dns-minikube" [737e5bad-1cf7-448f-acf2-93e38e152f16] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 18:44:28.002568    5102 system_pods.go:89] "kube-proxy-n686b" [68644dfa-f7a5-42aa-938f-b928268e97f0] Running
	I1006 18:44:28.002573    5102 system_pods.go:89] "kube-scheduler-addons-442328" [fafbbc55-3ea9-4805-8bc2-0a365282ee86] Running
	I1006 18:44:28.002586    5102 system_pods.go:89] "metrics-server-85b7d694d7-swsbd" [d28f1e31-8b2f-42c5-b7a2-a1662d0c5412] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 18:44:28.002593    5102 system_pods.go:89] "nvidia-device-plugin-daemonset-2ptdw" [9a959e36-17ef-49b2-8c50-36b639bbbbf7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 18:44:28.002600    5102 system_pods.go:89] "registry-66898fdd98-k4cb6" [0b43b208-864a-4999-a910-ca608e917e81] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 18:44:28.002610    5102 system_pods.go:89] "registry-creds-764b6fb674-pgnhk" [7ae00ec6-941d-4df4-a11f-08481b31d714] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 18:44:28.002616    5102 system_pods.go:89] "registry-proxy-zqdsp" [b9743061-9fde-4f5c-aa4f-c77616b5dfa7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 18:44:28.002626    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-glwdc" [1ac758b5-014d-4f6e-a68a-a653a4c90550] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 18:44:28.002645    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jg8hf" [b5a9bdde-a4d4-442a-9cee-f8cbc46520d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 18:44:28.002671    5102 system_pods.go:89] "storage-provisioner" [63c4de3f-fd5a-4fb8-ba9f-4e238f7541f3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 18:44:28.002686    5102 retry.go:31] will retry after 463.661369ms: missing components: kube-dns
	I1006 18:44:28.097200    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:28.197384    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:28.197637    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:28.214533    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:28.472607    5102 system_pods.go:86] 19 kube-system pods found
	I1006 18:44:28.472693    5102 system_pods.go:89] "coredns-66bc5c9577-bx5cf" [779ed2d3-fc88-4853-ac93-7a56e62a0190] Running
	I1006 18:44:28.472720    5102 system_pods.go:89] "csi-hostpath-attacher-0" [d762fc6a-25ef-424f-b865-914e712ed260] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 18:44:28.472740    5102 system_pods.go:89] "csi-hostpath-resizer-0" [f5c6a57b-496a-40d3-a7da-cae95047a0db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 18:44:28.472772    5102 system_pods.go:89] "csi-hostpathplugin-g7kvd" [db381122-217d-4859-9d86-d7d29a692af5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 18:44:28.472792    5102 system_pods.go:89] "etcd-addons-442328" [8562fac7-6c63-42c3-98bb-4c599ba17505] Running
	I1006 18:44:28.472813    5102 system_pods.go:89] "kindnet-g2tkh" [1e0c713a-0110-4300-9fe7-8834abac0a34] Running
	I1006 18:44:28.472845    5102 system_pods.go:89] "kube-apiserver-addons-442328" [5b6af8a1-10f7-478e-b97d-27e19e5f2469] Running
	I1006 18:44:28.472864    5102 system_pods.go:89] "kube-controller-manager-addons-442328" [0d008802-4fa7-4ef6-81d3-f391d8b8488f] Running
	I1006 18:44:28.472886    5102 system_pods.go:89] "kube-ingress-dns-minikube" [737e5bad-1cf7-448f-acf2-93e38e152f16] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 18:44:28.472903    5102 system_pods.go:89] "kube-proxy-n686b" [68644dfa-f7a5-42aa-938f-b928268e97f0] Running
	I1006 18:44:28.472923    5102 system_pods.go:89] "kube-scheduler-addons-442328" [fafbbc55-3ea9-4805-8bc2-0a365282ee86] Running
	I1006 18:44:28.472941    5102 system_pods.go:89] "metrics-server-85b7d694d7-swsbd" [d28f1e31-8b2f-42c5-b7a2-a1662d0c5412] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 18:44:28.472972    5102 system_pods.go:89] "nvidia-device-plugin-daemonset-2ptdw" [9a959e36-17ef-49b2-8c50-36b639bbbbf7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 18:44:28.472990    5102 system_pods.go:89] "registry-66898fdd98-k4cb6" [0b43b208-864a-4999-a910-ca608e917e81] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 18:44:28.473020    5102 system_pods.go:89] "registry-creds-764b6fb674-pgnhk" [7ae00ec6-941d-4df4-a11f-08481b31d714] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 18:44:28.473041    5102 system_pods.go:89] "registry-proxy-zqdsp" [b9743061-9fde-4f5c-aa4f-c77616b5dfa7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 18:44:28.473063    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-glwdc" [1ac758b5-014d-4f6e-a68a-a653a4c90550] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 18:44:28.473085    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jg8hf" [b5a9bdde-a4d4-442a-9cee-f8cbc46520d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 18:44:28.473103    5102 system_pods.go:89] "storage-provisioner" [63c4de3f-fd5a-4fb8-ba9f-4e238f7541f3] Running
	I1006 18:44:28.473126    5102 system_pods.go:126] duration metric: took 1.589124541s to wait for k8s-apps to be running ...
	I1006 18:44:28.473145    5102 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 18:44:28.473216    5102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 18:44:28.571350    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:28.633630    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:28.639585    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:28.712586    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:28.878770    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.756061368s)
	W1006 18:44:28.878818    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:28.879003    5102 retry.go:31] will retry after 17.384698944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:28.878899    5102 system_svc.go:56] duration metric: took 405.738665ms WaitForService to wait for kubelet
	I1006 18:44:28.879055    5102 kubeadm.go:586] duration metric: took 43.288711831s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 18:44:28.879081    5102 node_conditions.go:102] verifying NodePressure condition ...
	I1006 18:44:28.882100    5102 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 18:44:28.882145    5102 node_conditions.go:123] node cpu capacity is 2
	I1006 18:44:28.882159    5102 node_conditions.go:105] duration metric: took 3.071991ms to run NodePressure ...
	I1006 18:44:28.882212    5102 start.go:241] waiting for startup goroutines ...
	I1006 18:44:29.058736    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:29.132659    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:29.139417    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:29.212269    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:29.557815    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:29.636964    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:29.639567    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:29.736882    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:30.068891    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:30.133410    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:30.139330    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:30.225530    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:30.558236    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:30.658913    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:30.659124    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:30.712702    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:31.058730    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:31.133737    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:31.139555    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:31.212640    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:31.558522    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:31.633741    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:31.639575    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:31.713212    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:32.058481    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:32.133750    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:32.139895    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:32.213793    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:32.558314    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:32.633898    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:32.639320    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:32.712501    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:33.058324    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:33.134098    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:33.139486    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:33.212933    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:33.558021    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:33.633575    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:33.639527    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:33.713394    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:34.057632    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:34.133638    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:34.139124    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:34.212720    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:34.558370    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:34.659583    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:34.659817    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:34.713047    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:35.058418    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:35.133549    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:35.139613    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:35.212538    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:35.558083    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:35.633126    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:35.638939    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:35.712837    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:36.059123    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:36.133189    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:36.139027    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:36.213783    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:36.557801    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:36.633248    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:36.638426    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:36.712556    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:37.057563    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:37.133400    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:37.139372    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:37.212699    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:37.558995    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:37.633387    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:37.639459    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:37.712809    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:38.058353    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:38.133984    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:38.138888    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:38.213147    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:38.558459    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:38.633172    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:38.639415    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:38.712180    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:39.058785    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:39.133528    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:39.139838    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:39.213488    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:39.558668    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:39.633918    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:39.639987    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:39.713465    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:40.061787    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:40.134847    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:40.139586    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:40.213256    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:40.559503    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:40.634378    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:40.638998    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:40.713252    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:41.058995    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:41.133560    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:41.139876    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:41.213230    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:41.565542    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:41.632909    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:41.640099    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:41.713444    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:42.059134    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:42.134349    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:42.140649    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:42.217903    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:42.558297    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:42.633904    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:42.638814    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:42.737018    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:43.059062    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:43.137816    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:43.139619    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:43.212862    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:43.559209    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:43.632975    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:43.639889    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:43.713072    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:44.058646    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:44.133660    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:44.139346    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:44.212939    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:44.561745    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:44.633226    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:44.639743    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:44.714019    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:45.063331    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:45.134670    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:45.140226    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:45.218338    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:45.558316    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:45.633566    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:45.639768    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:45.713108    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:46.058817    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:46.133217    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:46.139524    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:46.212974    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:46.264311    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:44:46.558686    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:46.633089    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:46.639074    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:46.713604    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:47.063850    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:47.134113    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:47.140974    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:47.213755    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:47.462752    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.198346753s)
	W1006 18:44:47.462840    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:47.462873    5102 retry.go:31] will retry after 21.39241557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:47.559460    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:47.633929    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:47.640182    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:47.712914    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:48.058864    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:48.133044    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:48.139512    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:48.212809    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:48.559165    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:48.633583    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:48.640444    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:48.713036    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:49.059103    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:49.133137    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:49.138892    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:49.213496    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:49.558070    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:49.633048    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:49.638831    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:49.712523    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:50.058300    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:50.133983    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:50.139068    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:50.212919    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:50.558044    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:50.633098    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:50.638926    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:50.713175    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:51.059395    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:51.133905    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:51.139276    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:51.212911    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:51.559301    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:51.633882    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:51.639295    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:51.712742    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:52.059786    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:52.134339    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:52.139845    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:52.213255    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:52.566282    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:52.633931    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:52.640043    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:52.713275    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:53.058120    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:53.133220    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:53.139143    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:53.213286    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:53.558342    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:53.632966    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:53.639332    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:53.713142    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:54.059945    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:54.132957    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:54.139522    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:54.214042    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:54.559354    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:54.659560    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:54.659683    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:54.712119    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:55.059400    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:55.133657    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:55.139671    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:55.212727    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:55.559686    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:55.632674    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:55.639348    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:55.713377    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:56.058301    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:56.133482    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:56.139613    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:56.212616    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:56.559483    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:56.634733    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:56.640765    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:56.712723    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:57.059021    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:57.133248    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:57.140167    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:57.212898    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:57.559547    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:57.634661    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:57.639478    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:57.735113    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:58.058671    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:58.133518    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:58.139370    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:58.212409    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:58.576418    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:58.636063    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:58.639850    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:58.735289    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:59.059204    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:59.160617    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:59.160813    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:59.212779    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:59.559591    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:59.633587    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:59.639287    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:59.712942    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:00.061275    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:00.137417    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:00.143340    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:00.213776    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:00.575887    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:00.646228    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:00.647574    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:00.714999    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:01.059217    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:01.133689    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:01.140228    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:01.213191    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:01.558590    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:01.633371    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:01.641709    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:01.713401    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:02.058427    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:02.134171    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:02.139069    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:02.212386    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:02.558788    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:02.633345    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:02.639026    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:02.713233    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:03.059425    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:03.134000    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:03.139146    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:03.213544    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:03.557654    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:03.633321    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:03.638786    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:03.712677    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:04.058485    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:04.133851    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:04.138820    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:04.213654    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:04.559356    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:04.634028    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:04.639040    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:04.713419    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:05.059336    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:05.134362    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:05.140529    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:05.212868    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:05.563010    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:05.633644    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:05.639924    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:05.713163    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:06.062662    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:06.142473    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:06.145443    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:06.214715    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:06.579168    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:06.750663    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:06.750939    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:06.751638    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:07.057895    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:07.133711    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:07.139436    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:07.212434    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:07.558841    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:07.632980    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:07.639030    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:07.713338    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:08.058426    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:08.133869    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:08.138999    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:08.213268    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:08.558352    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:08.633638    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:08.639616    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:08.712767    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:08.856179    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:45:09.058593    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:09.133961    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:09.138896    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:09.213924    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:09.559629    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:09.634013    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:09.638683    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:09.712538    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:10.048681    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.1924654s)
	W1006 18:45:10.048718    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:45:10.048736    5102 retry.go:31] will retry after 34.297265778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:45:10.058896    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:10.133216    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:10.139236    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:10.212576    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:10.558997    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:10.633297    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:10.639228    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:10.712441    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:11.058390    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:11.134233    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:11.139431    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:11.212966    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:11.559036    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:11.632974    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:11.638978    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:11.712958    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:12.058422    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:12.159507    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:12.159679    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:12.212719    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:12.558224    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:12.634913    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:12.639895    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:12.712632    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:13.058324    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:13.133416    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:13.139483    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:13.212311    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:13.558712    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:13.634505    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:13.639691    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:13.713301    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:14.058337    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:14.133847    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:14.139068    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:14.212664    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:14.558766    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:14.632669    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:14.650282    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:14.712643    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:15.064102    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:15.133676    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:15.139910    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:15.213250    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:15.558148    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:15.642558    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:15.652634    5102 kapi.go:107] duration metric: took 1m23.521914892s to wait for kubernetes.io/minikube-addons=registry ...
	I1006 18:45:15.713607    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:16.058733    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:16.133098    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:16.213336    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:16.563676    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:16.633227    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:16.733236    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:17.058521    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:17.134551    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:17.212627    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:17.558206    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:17.633876    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:17.713130    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:18.059076    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:18.133513    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:18.212892    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:18.558906    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:18.633833    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:18.713142    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:19.061771    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:19.133962    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:19.213337    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:19.561088    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:19.636493    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:19.713010    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:20.059568    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:20.134293    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:20.218793    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:20.559625    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:20.651263    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:20.714071    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:21.058614    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:21.133456    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:21.212210    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:21.558875    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:21.633895    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:21.713600    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:22.059004    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:22.133895    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:22.213590    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:22.558736    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:22.633636    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:22.712740    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:23.059064    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:23.133866    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:23.212656    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:23.559303    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:23.633807    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:23.728488    5102 kapi.go:107] duration metric: took 1m28.019250269s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1006 18:45:23.731941    5102 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-442328 cluster.
	I1006 18:45:23.735014    5102 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1006 18:45:23.738153    5102 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1006 18:45:24.058366    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:24.158922    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:24.559206    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:24.637340    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:25.059396    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:25.134786    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:25.559177    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:25.633300    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:26.059641    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:26.132893    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:26.558701    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:26.633398    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:27.059399    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:27.133633    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:27.558062    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:27.633454    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:28.058543    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:28.134201    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:28.557835    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:28.638614    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:29.059143    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:29.133582    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:29.558844    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:29.632825    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:30.072376    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:30.165606    5102 kapi.go:107] duration metric: took 1m38.036053584s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1006 18:45:30.558276    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:31.064370    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:31.559152    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:32.059052    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:32.559117    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:33.059026    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:33.558487    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:34.058268    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:34.558264    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:35.059337    5102 kapi.go:107] duration metric: took 1m42.504736473s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1006 18:45:44.346181    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:45:45.411689    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.065470802s)
	W1006 18:45:45.411759    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 18:45:45.411855    5102 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 18:45:45.456224    5102 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, registry-creds, cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, metrics-server, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1006 18:45:45.471259    5102 addons.go:514] duration metric: took 1m59.880667657s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin registry-creds cloud-spanner ingress-dns storage-provisioner default-storageclass metrics-server yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1006 18:45:45.471338    5102 start.go:246] waiting for cluster config update ...
	I1006 18:45:45.471360    5102 start.go:255] writing updated cluster config ...
	I1006 18:45:45.472427    5102 ssh_runner.go:195] Run: rm -f paused
	I1006 18:45:45.476964    5102 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 18:45:45.481673    5102 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bx5cf" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:45.505271    5102 pod_ready.go:94] pod "coredns-66bc5c9577-bx5cf" is "Ready"
	I1006 18:45:45.505303    5102 pod_ready.go:86] duration metric: took 23.595415ms for pod "coredns-66bc5c9577-bx5cf" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:45.508654    5102 pod_ready.go:83] waiting for pod "etcd-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:45.513990    5102 pod_ready.go:94] pod "etcd-addons-442328" is "Ready"
	I1006 18:45:45.514022    5102 pod_ready.go:86] duration metric: took 5.340153ms for pod "etcd-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:45.517944    5102 pod_ready.go:83] waiting for pod "kube-apiserver-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:45.523637    5102 pod_ready.go:94] pod "kube-apiserver-addons-442328" is "Ready"
	I1006 18:45:45.523674    5102 pod_ready.go:86] duration metric: took 5.699574ms for pod "kube-apiserver-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:45.526658    5102 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:45.881700    5102 pod_ready.go:94] pod "kube-controller-manager-addons-442328" is "Ready"
	I1006 18:45:45.881733    5102 pod_ready.go:86] duration metric: took 355.048417ms for pod "kube-controller-manager-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:46.081054    5102 pod_ready.go:83] waiting for pod "kube-proxy-n686b" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:46.481067    5102 pod_ready.go:94] pod "kube-proxy-n686b" is "Ready"
	I1006 18:45:46.481098    5102 pod_ready.go:86] duration metric: took 400.014045ms for pod "kube-proxy-n686b" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:46.681407    5102 pod_ready.go:83] waiting for pod "kube-scheduler-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:47.081448    5102 pod_ready.go:94] pod "kube-scheduler-addons-442328" is "Ready"
	I1006 18:45:47.081477    5102 pod_ready.go:86] duration metric: took 400.039212ms for pod "kube-scheduler-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:47.081490    5102 pod_ready.go:40] duration metric: took 1.604491048s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 18:45:47.489876    5102 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 18:45:47.495228    5102 out.go:179] * Done! kubectl is now configured to use "addons-442328" cluster and "default" namespace by default
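
Two clarifying notes on the start-up log above; both snippets below are illustrative sketches written for this report, not content captured from the cluster under test.

The "Enabling 'inspektor-gadget' returned an error" warning is kubectl rejecting /etc/kubernetes/addons/ig-crd.yaml because that manifest declares neither apiVersion nor kind. The file's actual contents are not shown in this log; as a generic reminder, any object passed to kubectl apply with validation enabled needs at least the following fields (hypothetical minimal object):

    apiVersion: v1        # required by kubectl validation
    kind: ConfigMap       # required by kubectl validation
    metadata:
      name: example       # placeholder name
    data: {}

The gcp-auth messages at 18:45:23 describe opting a pod out of credential mounting by giving it a label whose key is gcp-auth-skip-secret. A minimal pod sketch, assuming "true" as the label value (the log only names the key):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"  # key taken from the addon message; value assumed
    spec:
      containers:
      - name: app
        image: busybox                # placeholder image
        command: ["sleep", "3600"]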
	
	
	==> CRI-O <==
	Oct 06 18:48:39 addons-442328 crio[833]: time="2025-10-06T18:48:39.492329612Z" level=info msg="Removed container 31d8648ea03414701a734f2b39649e53dd3a2fb3dd3960e95b94986836c00694: kube-system/registry-creds-764b6fb674-pgnhk/registry-creds" id=4a5ace8e-671e-4ba2-b9c4-683f1b5ad423 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.130247802Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-lcr8t/POD" id=ce74adcf-9598-4750-aaa9-752409a2fa82 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.130317351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.143383612Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-lcr8t Namespace:default ID:bf9f1bf48cf75f3ee20438547d73f3885e313f6b65c1d86d4ec8b35d4230b005 UID:d790f4d0-054b-49e3-ab68-5de5ce7df219 NetNS:/var/run/netns/6ff29ebc-7b22-421c-badc-40cdb4b57f48 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002431150}] Aliases:map[]}"
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.143599833Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-lcr8t to CNI network \"kindnet\" (type=ptp)"
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.157295818Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-lcr8t Namespace:default ID:bf9f1bf48cf75f3ee20438547d73f3885e313f6b65c1d86d4ec8b35d4230b005 UID:d790f4d0-054b-49e3-ab68-5de5ce7df219 NetNS:/var/run/netns/6ff29ebc-7b22-421c-badc-40cdb4b57f48 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002431150}] Aliases:map[]}"
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.157448118Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-lcr8t for CNI network kindnet (type=ptp)"
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.165829339Z" level=info msg="Ran pod sandbox bf9f1bf48cf75f3ee20438547d73f3885e313f6b65c1d86d4ec8b35d4230b005 with infra container: default/hello-world-app-5d498dc89-lcr8t/POD" id=ce74adcf-9598-4750-aaa9-752409a2fa82 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.16697515Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=7db83fdf-58bb-4895-8259-826318a8c756 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.167136994Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=7db83fdf-58bb-4895-8259-826318a8c756 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.167197336Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=7db83fdf-58bb-4895-8259-826318a8c756 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.170775762Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=b4c68f81-da59-4785-a900-0d37e79810d6 name=/runtime.v1.ImageService/PullImage
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.173277762Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.932201396Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=b4c68f81-da59-4785-a900-0d37e79810d6 name=/runtime.v1.ImageService/PullImage
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.933036493Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=78d5287b-8711-41e9-9930-d16dc4d71dc4 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.937281198Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=2e565854-9f3d-452d-80b3-7464fb8fdf83 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.950430021Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-lcr8t/hello-world-app" id=ba316dac-86df-4345-9e93-d893a51c793c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.951678129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.968252184Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.968608545Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/031451142e2c29ccdf9f566cde231d729b76d8b16063cc35af0f906b03d49d3f/merged/etc/passwd: no such file or directory"
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.968706164Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/031451142e2c29ccdf9f566cde231d729b76d8b16063cc35af0f906b03d49d3f/merged/etc/group: no such file or directory"
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.969051825Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.996312516Z" level=info msg="Created container ffcc026b5296bb2761eb47db57b130b39c1f2db1039324306e716b46dd4003e9: default/hello-world-app-5d498dc89-lcr8t/hello-world-app" id=ba316dac-86df-4345-9e93-d893a51c793c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 18:48:44 addons-442328 crio[833]: time="2025-10-06T18:48:44.997962984Z" level=info msg="Starting container: ffcc026b5296bb2761eb47db57b130b39c1f2db1039324306e716b46dd4003e9" id=14b6cb39-97a6-4461-bdd9-be4bb7adf99d name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 18:48:45 addons-442328 crio[833]: time="2025-10-06T18:48:45.004503561Z" level=info msg="Started container" PID=7095 containerID=ffcc026b5296bb2761eb47db57b130b39c1f2db1039324306e716b46dd4003e9 description=default/hello-world-app-5d498dc89-lcr8t/hello-world-app id=14b6cb39-97a6-4461-bdd9-be4bb7adf99d name=/runtime.v1.RuntimeService/StartContainer sandboxID=bf9f1bf48cf75f3ee20438547d73f3885e313f6b65c1d86d4ec8b35d4230b005
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	ffcc026b5296b       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        1 second ago        Running             hello-world-app                          0                   bf9f1bf48cf75       hello-world-app-5d498dc89-lcr8t             default
	d2d6e251cbbef       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             7 seconds ago       Exited              registry-creds                           1                   ebd368dabe8ed       registry-creds-764b6fb674-pgnhk             kube-system
	b826ca6b83c02       docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac                                              2 minutes ago       Running             nginx                                    0                   2e76308f7260c       nginx                                       default
	835e7ff88c299       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago       Running             busybox                                  0                   977564ff1914f       busybox                                     default
	ebe3bd43a396e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago       Running             csi-snapshotter                          0                   e2b2f3f9b3ef2       csi-hostpathplugin-g7kvd                    kube-system
	4cd182a7b457f       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   e2b2f3f9b3ef2       csi-hostpathplugin-g7kvd                    kube-system
	08ff7320e4ddc       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   e2b2f3f9b3ef2       csi-hostpathplugin-g7kvd                    kube-system
	9ca99ca5ed94b       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   e2b2f3f9b3ef2       csi-hostpathplugin-g7kvd                    kube-system
	55e40e0693ffa       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago       Running             controller                               0                   55b478414d411       ingress-nginx-controller-675c5ddd98-55rt2   ingress-nginx
	abd23f19f3d54       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago       Running             gcp-auth                                 0                   67a9cd802d4cd       gcp-auth-78565c9fb4-5qbhb                   gcp-auth
	14f703b6467c6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   e2b2f3f9b3ef2       csi-hostpathplugin-g7kvd                    kube-system
	7d6f337fccbd5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:74b72c3673aff7e1fa7c3ebae80b5dbe5446ce1906ef8d4f98d4b9f6e72c88e1                            3 minutes ago       Running             gadget                                   0                   05450c520b570       gadget-6t8nb                                gadget
	6172f7270c3d6       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago       Running             registry-proxy                           0                   15431d9de0947       registry-proxy-zqdsp                        kube-system
	400a8af72e5ff       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago       Exited              patch                                    0                   60ce7451efd4c       ingress-nginx-admission-patch-7ksts         ingress-nginx
	8b64f9fe4af61       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago       Running             nvidia-device-plugin-ctr                 0                   497a91a0ca1cc       nvidia-device-plugin-daemonset-2ptdw        kube-system
	aa4f571114f60       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago       Running             csi-external-health-monitor-controller   0                   e2b2f3f9b3ef2       csi-hostpathplugin-g7kvd                    kube-system
	95373c9fa405f       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago       Running             local-path-provisioner                   0                   875a8baf32253       local-path-provisioner-648f6765c9-lgzk2     local-path-storage
	c72005fae6fa3       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago       Running             yakd                                     0                   7268c15204001       yakd-dashboard-5ff678cb9-ffptf              yakd-dashboard
	e949d4e668015       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago       Running             csi-resizer                              0                   942b2f7e1d62f       csi-hostpath-resizer-0                      kube-system
	ae20a5867a78c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago       Exited              create                                   0                   91934e9602ec3       ingress-nginx-admission-create-xspd2        ingress-nginx
	eaea2302c49ad       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago       Running             metrics-server                           0                   897a1cb447e18       metrics-server-85b7d694d7-swsbd             kube-system
	156dcf2838ea3       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago       Running             volume-snapshot-controller               0                   447d906949aa5       snapshot-controller-7d9fbc56b8-glwdc        kube-system
	f77ff0e3011bc       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago       Running             volume-snapshot-controller               0                   69bcfd32a0d18       snapshot-controller-7d9fbc56b8-jg8hf        kube-system
	47db75a0747be       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago       Running             csi-attacher                             0                   26105da3f9b8e       csi-hostpath-attacher-0                     kube-system
	c540f260b155a       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago       Running             minikube-ingress-dns                     0                   b7ed222a8ba2f       kube-ingress-dns-minikube                   kube-system
	d6ed31f83769e       gcr.io/cloud-spanner-emulator/emulator@sha256:77d0cd8103fe32875bbb04c070a7d1db292093b65d11c99c00cf39e8a13852f5                               4 minutes ago       Running             cloud-spanner-emulator                   0                   f499bced467a0       cloud-spanner-emulator-85f6b7fc65-fcmlt     default
	d982508a1963a       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           4 minutes ago       Running             registry                                 0                   3b74f761b66c0       registry-66898fdd98-k4cb6                   kube-system
	4a454595ca74d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   8f00c80f9e94d       storage-provisioner                         kube-system
	1ee286915381b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago       Running             coredns                                  0                   f2232b4fa0b3c       coredns-66bc5c9577-bx5cf                    kube-system
	50d122dfe0df6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago       Running             kube-proxy                               0                   d621058fceeef       kube-proxy-n686b                            kube-system
	5c527c40e2db3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago       Running             kindnet-cni                              0                   0a6579d0c7a4d       kindnet-g2tkh                               kube-system
	8f11330482798       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago       Running             kube-apiserver                           0                   1ca73eb868415       kube-apiserver-addons-442328                kube-system
	4b78f6f126d3c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago       Running             kube-scheduler                           0                   0580ee0ec7562       kube-scheduler-addons-442328                kube-system
	c436c27fc179e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago       Running             etcd                                     0                   1f2cd306074d5       etcd-addons-442328                          kube-system
	9bb11c78f5525       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago       Running             kube-controller-manager                  0                   c65cfdb6c3e2f       kube-controller-manager-addons-442328       kube-system
	
	
	==> coredns [1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6] <==
	[INFO] 10.244.0.17:35269 - 1420 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003053911s
	[INFO] 10.244.0.17:35269 - 38230 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00012368s
	[INFO] 10.244.0.17:35269 - 56459 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000079896s
	[INFO] 10.244.0.17:34847 - 21209 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000198743s
	[INFO] 10.244.0.17:34847 - 20747 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000086469s
	[INFO] 10.244.0.17:60064 - 49807 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091802s
	[INFO] 10.244.0.17:60064 - 49626 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000219067s
	[INFO] 10.244.0.17:34007 - 1338 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000113915s
	[INFO] 10.244.0.17:34007 - 1141 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000141527s
	[INFO] 10.244.0.17:51545 - 34372 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001577599s
	[INFO] 10.244.0.17:51545 - 34192 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00161138s
	[INFO] 10.244.0.17:46963 - 36903 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00012651s
	[INFO] 10.244.0.17:46963 - 36755 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000183529s
	[INFO] 10.244.0.20:52334 - 63421 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000271122s
	[INFO] 10.244.0.20:36291 - 10235 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000153826s
	[INFO] 10.244.0.20:44980 - 30076 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000255778s
	[INFO] 10.244.0.20:52634 - 5665 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155459s
	[INFO] 10.244.0.20:39976 - 62810 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000239712s
	[INFO] 10.244.0.20:57460 - 41861 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000181306s
	[INFO] 10.244.0.20:50529 - 23139 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002088884s
	[INFO] 10.244.0.20:44821 - 2583 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002507326s
	[INFO] 10.244.0.20:39688 - 51821 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002640901s
	[INFO] 10.244.0.20:40418 - 26182 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002897491s
	[INFO] 10.244.0.23:33827 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00015541s
	[INFO] 10.244.0.23:44308 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000107663s
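
The NXDOMAIN/NOERROR pairs above are the pod DNS search list at work: the queried names have fewer dots than the resolver's ndots threshold, so each configured search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, and the host's us-east-2.compute.internal domain, all visible in the expanded names) is tried and answered NXDOMAIN before the name resolves as-is. A resolv.conf consistent with these lookups would look roughly like the sketch below; it is an assumption for illustration, not captured from a pod in this run, and the nameserver address is a placeholder:

    search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    nameserver 10.96.0.10   # assumed cluster DNS ClusterIP; not shown in this log
    options ndots:5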
	
	
	==> describe nodes <==
	Name:               addons-442328
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-442328
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=addons-442328
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T18_43_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-442328
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-442328"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 18:43:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-442328
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 18:48:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 18:48:36 +0000   Mon, 06 Oct 2025 18:43:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 18:48:36 +0000   Mon, 06 Oct 2025 18:43:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 18:48:36 +0000   Mon, 06 Oct 2025 18:43:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 18:48:36 +0000   Mon, 06 Oct 2025 18:44:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-442328
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 2553b47105a84e17acadae7422faa4a6
	  System UUID:                f9ea306a-7c47-4dcd-b3b3-b1912080fbb2
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  default                     cloud-spanner-emulator-85f6b7fc65-fcmlt      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  default                     hello-world-app-5d498dc89-lcr8t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-6t8nb                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  gcp-auth                    gcp-auth-78565c9fb4-5qbhb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-55rt2    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m55s
	  kube-system                 coredns-66bc5c9577-bx5cf                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m1s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 csi-hostpathplugin-g7kvd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 etcd-addons-442328                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m6s
	  kube-system                 kindnet-g2tkh                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m2s
	  kube-system                 kube-apiserver-addons-442328                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-addons-442328        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-proxy-n686b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-addons-442328                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 metrics-server-85b7d694d7-swsbd              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m56s
	  kube-system                 nvidia-device-plugin-daemonset-2ptdw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 registry-66898fdd98-k4cb6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 registry-creds-764b6fb674-pgnhk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 registry-proxy-zqdsp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 snapshot-controller-7d9fbc56b8-glwdc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 snapshot-controller-7d9fbc56b8-jg8hf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  local-path-storage          local-path-provisioner-648f6765c9-lgzk2      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-ffptf               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m58s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m13s (x8 over 5m13s)  kubelet          Node addons-442328 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m13s (x8 over 5m13s)  kubelet          Node addons-442328 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m13s (x8 over 5m13s)  kubelet          Node addons-442328 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m7s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m7s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m6s                   kubelet          Node addons-442328 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m6s                   kubelet          Node addons-442328 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m6s                   kubelet          Node addons-442328 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m2s                   node-controller  Node addons-442328 event: Registered Node addons-442328 in Controller
	  Normal   NodeReady                4m20s                  kubelet          Node addons-442328 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 6 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015541] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.518273] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033731] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.758438] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.412532] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 6 18:43] overlayfs: idmapped layers are currently not supported
	[  +0.067491] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f] <==
	{"level":"warn","ts":"2025-10-06T18:43:36.078136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.113890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.137239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.172655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.185158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.208974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.226882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.271511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.272365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.284386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.298584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.318314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.344566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.364165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.387643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.413811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.432890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.448754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.547767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:52.768568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:52.790705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:44:14.328400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:44:14.342776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:44:14.367987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:44:14.383177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44600","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [abd23f19f3d541ab9d91e25a7ddfe45774c89bb4aeb4147381d6afe6e6f4c94c] <==
	2025/10/06 18:45:23 GCP Auth Webhook started!
	2025/10/06 18:45:48 Ready to marshal response ...
	2025/10/06 18:45:48 Ready to write response ...
	2025/10/06 18:45:48 Ready to marshal response ...
	2025/10/06 18:45:48 Ready to write response ...
	2025/10/06 18:45:48 Ready to marshal response ...
	2025/10/06 18:45:48 Ready to write response ...
	2025/10/06 18:46:08 Ready to marshal response ...
	2025/10/06 18:46:08 Ready to write response ...
	2025/10/06 18:46:15 Ready to marshal response ...
	2025/10/06 18:46:15 Ready to write response ...
	2025/10/06 18:46:24 Ready to marshal response ...
	2025/10/06 18:46:24 Ready to write response ...
	2025/10/06 18:46:38 Ready to marshal response ...
	2025/10/06 18:46:38 Ready to write response ...
	2025/10/06 18:46:48 Ready to marshal response ...
	2025/10/06 18:46:48 Ready to write response ...
	2025/10/06 18:46:48 Ready to marshal response ...
	2025/10/06 18:46:48 Ready to write response ...
	2025/10/06 18:46:56 Ready to marshal response ...
	2025/10/06 18:46:56 Ready to write response ...
	2025/10/06 18:48:43 Ready to marshal response ...
	2025/10/06 18:48:43 Ready to write response ...
	
	
	==> kernel <==
	 18:48:46 up 31 min,  0 user,  load average: 0.42, 1.01, 0.55
	Linux addons-442328 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21] <==
	I1006 18:46:36.454666       1 main.go:301] handling current node
	I1006 18:46:46.448936       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:46:46.448974       1 main.go:301] handling current node
	I1006 18:46:56.452372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:46:56.452494       1 main.go:301] handling current node
	I1006 18:47:06.454113       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:47:06.454212       1 main.go:301] handling current node
	I1006 18:47:16.448828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:47:16.448865       1 main.go:301] handling current node
	I1006 18:47:26.451785       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:47:26.451905       1 main.go:301] handling current node
	I1006 18:47:36.456652       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:47:36.456701       1 main.go:301] handling current node
	I1006 18:47:46.456534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:47:46.456584       1 main.go:301] handling current node
	I1006 18:47:56.454162       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:47:56.454197       1 main.go:301] handling current node
	I1006 18:48:06.456328       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:48:06.456362       1 main.go:301] handling current node
	I1006 18:48:16.455786       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:48:16.455819       1 main.go:301] handling current node
	I1006 18:48:26.451756       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:48:26.451794       1 main.go:301] handling current node
	I1006 18:48:36.451782       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:48:36.451819       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea] <==
	W1006 18:44:51.551568       1 handler_proxy.go:99] no RequestInfo found in the context
	E1006 18:44:51.551624       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1006 18:44:51.551638       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1006 18:44:51.556686       1 handler_proxy.go:99] no RequestInfo found in the context
	E1006 18:44:51.556774       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1006 18:44:51.556785       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1006 18:44:58.451396       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.134.170:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.134.170:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.134.170:443: connect: connection refused" logger="UnhandledError"
	W1006 18:44:58.451981       1 handler_proxy.go:99] no RequestInfo found in the context
	E1006 18:44:58.452054       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1006 18:44:58.453649       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.134.170:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.134.170:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.134.170:443: connect: connection refused" logger="UnhandledError"
	E1006 18:44:58.477230       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.134.170:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.134.170:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.134.170:443: connect: connection refused" logger="UnhandledError"
	I1006 18:44:58.630548       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1006 18:45:56.624235       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58670: use of closed network connection
	E1006 18:45:56.868923       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58694: use of closed network connection
	I1006 18:46:24.433395       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1006 18:46:24.756504       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.55.190"}
	I1006 18:46:27.224630       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1006 18:46:47.059090       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1006 18:48:43.984283       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.22.172"}
	
	
	==> kube-controller-manager [9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b] <==
	I1006 18:43:44.302483       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1006 18:43:44.316677       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 18:43:44.323847       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1006 18:43:44.323920       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1006 18:43:44.323941       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1006 18:43:44.323955       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1006 18:43:44.323962       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1006 18:43:44.330437       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1006 18:43:44.332816       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-442328" podCIDRs=["10.244.0.0/24"]
	I1006 18:43:44.341730       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1006 18:43:44.344049       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 18:43:44.348677       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 18:43:44.348757       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 18:43:44.348772       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1006 18:43:50.664411       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1006 18:44:14.321562       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1006 18:44:14.321697       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1006 18:44:14.321750       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1006 18:44:14.351883       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1006 18:44:14.355819       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1006 18:44:14.422377       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 18:44:14.456110       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 18:44:29.295872       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1006 18:44:44.431452       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1006 18:44:44.491422       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce] <==
	I1006 18:43:46.968787       1 server_linux.go:53] "Using iptables proxy"
	I1006 18:43:47.050311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 18:43:47.151174       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 18:43:47.151216       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1006 18:43:47.151311       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 18:43:47.249157       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 18:43:47.249199       1 server_linux.go:132] "Using iptables Proxier"
	I1006 18:43:47.255179       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 18:43:47.265431       1 server.go:527] "Version info" version="v1.34.1"
	I1006 18:43:47.265458       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 18:43:47.268782       1 config.go:200] "Starting service config controller"
	I1006 18:43:47.268794       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 18:43:47.268816       1 config.go:106] "Starting endpoint slice config controller"
	I1006 18:43:47.268822       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 18:43:47.268840       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 18:43:47.268844       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 18:43:47.270719       1 config.go:309] "Starting node config controller"
	I1006 18:43:47.270728       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 18:43:47.270735       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 18:43:47.396351       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 18:43:47.396385       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 18:43:47.396414       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907] <==
	E1006 18:43:37.380956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1006 18:43:37.381054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1006 18:43:37.381124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1006 18:43:37.384051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1006 18:43:37.384219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1006 18:43:37.384314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1006 18:43:37.386615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1006 18:43:37.386831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1006 18:43:37.386948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1006 18:43:37.387051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1006 18:43:37.389000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1006 18:43:37.389214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1006 18:43:37.389344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1006 18:43:37.389501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1006 18:43:37.389608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1006 18:43:38.206261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1006 18:43:38.284336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1006 18:43:38.320585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1006 18:43:38.341045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1006 18:43:38.341657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1006 18:43:38.399042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1006 18:43:38.481764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1006 18:43:38.485828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1006 18:43:38.514705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1006 18:43:39.071359       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 18:46:59 addons-442328 kubelet[1264]: I1006 18:46:59.874561    1264 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64f02e1e-cb38-43c8-940d-589d8b1d18c1" path="/var/lib/kubelet/pods/64f02e1e-cb38-43c8-940d-589d8b1d18c1/volumes"
	Oct 06 18:47:16 addons-442328 kubelet[1264]: I1006 18:47:16.870724    1264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-k4cb6" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 18:47:21 addons-442328 kubelet[1264]: I1006 18:47:21.871303    1264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zqdsp" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 18:47:37 addons-442328 kubelet[1264]: I1006 18:47:37.870553    1264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-2ptdw" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 18:47:39 addons-442328 kubelet[1264]: I1006 18:47:39.945144    1264 scope.go:117] "RemoveContainer" containerID="51a14da27e04a3bc97dcc20db293d891de693e89d572e775fd5625f6ecb2ae5e"
	Oct 06 18:47:39 addons-442328 kubelet[1264]: I1006 18:47:39.956638    1264 scope.go:117] "RemoveContainer" containerID="91ca056fa9e3f548a0605149ce323952057175c6964da191b0aa2c0496b638a4"
	Oct 06 18:47:39 addons-442328 kubelet[1264]: I1006 18:47:39.966080    1264 scope.go:117] "RemoveContainer" containerID="b530240fa6406b5ca0228b0f16447301dd4d7a1f84bab51a9f1fef8727c3c803"
	Oct 06 18:47:39 addons-442328 kubelet[1264]: E1006 18:47:39.990051    1264 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d7d64afb546e60bf77595c377ce150b050d6c444e8067b158c12750fceec1318/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d7d64afb546e60bf77595c377ce150b050d6c444e8067b158c12750fceec1318/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/default_task-pv-pod-restore_b534812a-35c5-41b2-ab14-97ba7a613b28/task-pv-container/0.log" to get inode usage: stat /var/log/pods/default_task-pv-pod-restore_b534812a-35c5-41b2-ab14-97ba7a613b28/task-pv-container/0.log: no such file or directory
	Oct 06 18:47:39 addons-442328 kubelet[1264]: E1006 18:47:39.995160    1264 manager.go:1116] Failed to create existing container: /crio-97d0d9c5f5d204e1713ce31c039d2c9327d4c2ddd3571d6f1ddd867af40c7917: Error finding container 97d0d9c5f5d204e1713ce31c039d2c9327d4c2ddd3571d6f1ddd867af40c7917: Status 404 returned error can't find the container with id 97d0d9c5f5d204e1713ce31c039d2c9327d4c2ddd3571d6f1ddd867af40c7917
	Oct 06 18:48:36 addons-442328 kubelet[1264]: I1006 18:48:36.971513    1264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-pgnhk" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 18:48:36 addons-442328 kubelet[1264]: W1006 18:48:36.999016    1264 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27/crio-ebd368dabe8ed30ebf40ed94e8d36490a582eb997065f77b0eacc9d2f0b45f9c WatchSource:0}: Error finding container ebd368dabe8ed30ebf40ed94e8d36490a582eb997065f77b0eacc9d2f0b45f9c: Status 404 returned error can't find the container with id ebd368dabe8ed30ebf40ed94e8d36490a582eb997065f77b0eacc9d2f0b45f9c
	Oct 06 18:48:38 addons-442328 kubelet[1264]: I1006 18:48:38.463547    1264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-pgnhk" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 18:48:38 addons-442328 kubelet[1264]: I1006 18:48:38.463596    1264 scope.go:117] "RemoveContainer" containerID="31d8648ea03414701a734f2b39649e53dd3a2fb3dd3960e95b94986836c00694"
	Oct 06 18:48:38 addons-442328 kubelet[1264]: I1006 18:48:38.871220    1264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-k4cb6" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 18:48:39 addons-442328 kubelet[1264]: I1006 18:48:39.468982    1264 scope.go:117] "RemoveContainer" containerID="31d8648ea03414701a734f2b39649e53dd3a2fb3dd3960e95b94986836c00694"
	Oct 06 18:48:39 addons-442328 kubelet[1264]: I1006 18:48:39.469191    1264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-pgnhk" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 18:48:39 addons-442328 kubelet[1264]: I1006 18:48:39.469670    1264 scope.go:117] "RemoveContainer" containerID="d2d6e251cbbef910600098481d01e442740aa4bc23af4d5aff6cc6ebbad68024"
	Oct 06 18:48:39 addons-442328 kubelet[1264]: E1006 18:48:39.469904    1264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-pgnhk_kube-system(7ae00ec6-941d-4df4-a11f-08481b31d714)\"" pod="kube-system/registry-creds-764b6fb674-pgnhk" podUID="7ae00ec6-941d-4df4-a11f-08481b31d714"
	Oct 06 18:48:40 addons-442328 kubelet[1264]: I1006 18:48:40.476949    1264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-pgnhk" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 18:48:40 addons-442328 kubelet[1264]: I1006 18:48:40.477010    1264 scope.go:117] "RemoveContainer" containerID="d2d6e251cbbef910600098481d01e442740aa4bc23af4d5aff6cc6ebbad68024"
	Oct 06 18:48:40 addons-442328 kubelet[1264]: E1006 18:48:40.477174    1264 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-pgnhk_kube-system(7ae00ec6-941d-4df4-a11f-08481b31d714)\"" pod="kube-system/registry-creds-764b6fb674-pgnhk" podUID="7ae00ec6-941d-4df4-a11f-08481b31d714"
	Oct 06 18:48:40 addons-442328 kubelet[1264]: I1006 18:48:40.870890    1264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zqdsp" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 18:48:43 addons-442328 kubelet[1264]: I1006 18:48:43.884687    1264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h9mz\" (UniqueName: \"kubernetes.io/projected/d790f4d0-054b-49e3-ab68-5de5ce7df219-kube-api-access-4h9mz\") pod \"hello-world-app-5d498dc89-lcr8t\" (UID: \"d790f4d0-054b-49e3-ab68-5de5ce7df219\") " pod="default/hello-world-app-5d498dc89-lcr8t"
	Oct 06 18:48:43 addons-442328 kubelet[1264]: I1006 18:48:43.884736    1264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d790f4d0-054b-49e3-ab68-5de5ce7df219-gcp-creds\") pod \"hello-world-app-5d498dc89-lcr8t\" (UID: \"d790f4d0-054b-49e3-ab68-5de5ce7df219\") " pod="default/hello-world-app-5d498dc89-lcr8t"
	Oct 06 18:48:44 addons-442328 kubelet[1264]: W1006 18:48:44.165285    1264 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27/crio-bf9f1bf48cf75f3ee20438547d73f3885e313f6b65c1d86d4ec8b35d4230b005 WatchSource:0}: Error finding container bf9f1bf48cf75f3ee20438547d73f3885e313f6b65c1d86d4ec8b35d4230b005: Status 404 returned error can't find the container with id bf9f1bf48cf75f3ee20438547d73f3885e313f6b65c1d86d4ec8b35d4230b005
	
	
	==> storage-provisioner [4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58] <==
	W1006 18:48:21.561463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:23.564813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:23.571498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:25.574446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:25.578948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:27.581687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:27.586047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:29.590559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:29.595603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:31.598244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:31.602609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:33.605363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:33.619198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:35.622056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:35.628982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:37.631637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:37.638776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:39.642521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:39.648870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:41.651430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:41.658408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:43.662921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:43.669090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:45.673056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:48:45.678233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-442328 -n addons-442328
helpers_test.go:269: (dbg) Run:  kubectl --context addons-442328 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-xspd2 ingress-nginx-admission-patch-7ksts
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-442328 describe pod ingress-nginx-admission-create-xspd2 ingress-nginx-admission-patch-7ksts
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-442328 describe pod ingress-nginx-admission-create-xspd2 ingress-nginx-admission-patch-7ksts: exit status 1 (92.948761ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xspd2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7ksts" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-442328 describe pod ingress-nginx-admission-create-xspd2 ingress-nginx-admission-patch-7ksts: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (254.326887ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 18:48:47.649732   14759 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:48:47.650018   14759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:48:47.650053   14759 out.go:374] Setting ErrFile to fd 2...
	I1006 18:48:47.650074   14759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:48:47.650353   14759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:48:47.650677   14759 mustload.go:65] Loading cluster: addons-442328
	I1006 18:48:47.651084   14759 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:48:47.651126   14759 addons.go:606] checking whether the cluster is paused
	I1006 18:48:47.651254   14759 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:48:47.651295   14759 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:48:47.651813   14759 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:48:47.668916   14759 ssh_runner.go:195] Run: systemctl --version
	I1006 18:48:47.668981   14759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:48:47.690174   14759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:48:47.787483   14759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:48:47.787573   14759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:48:47.822621   14759 cri.go:89] found id: "d2d6e251cbbef910600098481d01e442740aa4bc23af4d5aff6cc6ebbad68024"
	I1006 18:48:47.822639   14759 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:48:47.822644   14759 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:48:47.822654   14759 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:48:47.822658   14759 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:48:47.822661   14759 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:48:47.822674   14759 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:48:47.822678   14759 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:48:47.822681   14759 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:48:47.822688   14759 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:48:47.822691   14759 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:48:47.822694   14759 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:48:47.822697   14759 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:48:47.822700   14759 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:48:47.822703   14759 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:48:47.822708   14759 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:48:47.822711   14759 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:48:47.822714   14759 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:48:47.822717   14759 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:48:47.822720   14759 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:48:47.822725   14759 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:48:47.822728   14759 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:48:47.822731   14759 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:48:47.822734   14759 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:48:47.822738   14759 cri.go:89] found id: ""
	I1006 18:48:47.822791   14759 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:48:47.838387   14759 out.go:203] 
	W1006 18:48:47.841546   14759 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:48:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:48:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:48:47.841573   14759 out.go:285] * 
	* 
	W1006 18:48:47.845542   14759 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:48:47.848418   14759 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-442328 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 addons disable ingress --alsologtostderr -v=1: exit status 11 (272.928064ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 18:48:47.908658   14803 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:48:47.908881   14803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:48:47.908896   14803 out.go:374] Setting ErrFile to fd 2...
	I1006 18:48:47.908901   14803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:48:47.909204   14803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:48:47.909553   14803 mustload.go:65] Loading cluster: addons-442328
	I1006 18:48:47.909921   14803 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:48:47.909933   14803 addons.go:606] checking whether the cluster is paused
	I1006 18:48:47.910032   14803 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:48:47.910047   14803 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:48:47.910603   14803 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:48:47.928454   14803 ssh_runner.go:195] Run: systemctl --version
	I1006 18:48:47.928621   14803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:48:47.952123   14803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:48:48.061620   14803 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:48:48.061878   14803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:48:48.096234   14803 cri.go:89] found id: "d2d6e251cbbef910600098481d01e442740aa4bc23af4d5aff6cc6ebbad68024"
	I1006 18:48:48.096301   14803 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:48:48.096320   14803 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:48:48.096339   14803 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:48:48.096359   14803 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:48:48.096389   14803 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:48:48.096411   14803 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:48:48.096430   14803 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:48:48.096448   14803 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:48:48.096469   14803 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:48:48.096497   14803 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:48:48.096523   14803 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:48:48.096534   14803 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:48:48.096537   14803 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:48:48.096541   14803 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:48:48.096546   14803 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:48:48.096549   14803 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:48:48.096552   14803 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:48:48.096555   14803 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:48:48.096559   14803 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:48:48.096576   14803 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:48:48.096579   14803 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:48:48.096582   14803 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:48:48.096585   14803 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:48:48.096601   14803 cri.go:89] found id: ""
	I1006 18:48:48.096674   14803 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:48:48.112261   14803 out.go:203] 
	W1006 18:48:48.115234   14803 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:48:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:48:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:48:48.115259   14803 out.go:285] * 
	* 
	W1006 18:48:48.119130   14803 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:48:48.122122   14803 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-442328 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.02s)
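Each addons-disable failure in this run follows the same pattern visible in the stderr above: minikube first lists kube-system containers through crictl (which succeeds), then checks whether the cluster is paused by running `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory" on this crio node, so the command aborts with MK_ADDON_DISABLE_PAUSED. A minimal sketch for reproducing that check by hand against the profile from this log (assumes the addons-442328 profile is still running; the two commands simply replay the ssh_runner calls recorded above):
	# replay the container-listing step, which succeeds in the log
	out/minikube-linux-arm64 -p addons-442328 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# replay the paused check itself; on this node it fails because /run/runc does not exist
	out/minikube-linux-arm64 -p addons-442328 ssh -- sudo runc list -f json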

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-6t8nb" [7e51b298-11df-4557-9e95-cbfb2f4fbf08] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004199477s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (269.566126ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 18:46:23.906182   12255 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:46:23.906348   12255 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:23.906360   12255 out.go:374] Setting ErrFile to fd 2...
	I1006 18:46:23.906365   12255 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:23.906642   12255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:46:23.906931   12255 mustload.go:65] Loading cluster: addons-442328
	I1006 18:46:23.907287   12255 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:23.907305   12255 addons.go:606] checking whether the cluster is paused
	I1006 18:46:23.907405   12255 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:23.907423   12255 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:46:23.908026   12255 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:46:23.931414   12255 ssh_runner.go:195] Run: systemctl --version
	I1006 18:46:23.931473   12255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:46:23.949073   12255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:46:24.046695   12255 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:46:24.046832   12255 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:46:24.078222   12255 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:46:24.078245   12255 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:46:24.078250   12255 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:46:24.078254   12255 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:46:24.078258   12255 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:46:24.078261   12255 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:46:24.078264   12255 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:46:24.078267   12255 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:46:24.078270   12255 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:46:24.078277   12255 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:46:24.078283   12255 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:46:24.078290   12255 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:46:24.078297   12255 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:46:24.078300   12255 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:46:24.078304   12255 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:46:24.078309   12255 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:46:24.078315   12255 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:46:24.078321   12255 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:46:24.078324   12255 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:46:24.078327   12255 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:46:24.078332   12255 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:46:24.078335   12255 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:46:24.078339   12255 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:46:24.078342   12255 cri.go:89] found id: ""
	I1006 18:46:24.078394   12255 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:46:24.093654   12255 out.go:203] 
	W1006 18:46:24.096706   12255 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:46:24.096730   12255 out.go:285] * 
	* 
	W1006 18:46:24.100530   12255 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:46:24.105535   12255 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-442328 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)
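The MK_ADDON_DISABLE_PAUSED / MK_ADDON_ENABLE_PAUSED failures in this section all show the same pattern: the addons command first checks whether the cluster is paused, that check shells out to "sudo runc list -f json" on the node, and on this cri-o node the command fails because /run/runc does not exist. A minimal way to reproduce the failing check by hand, assuming the addons-442328 profile is still running (plain minikube, runc and crictl invocations, not part of the test suite):

    # the exact command the paused check runs; expected to fail with "open /run/runc: no such file or directory"
    out/minikube-linux-arm64 -p addons-442328 ssh -- sudo runc list -f json
    # confirm the runc state directory is simply absent on the node
    out/minikube-linux-arm64 -p addons-442328 ssh -- ls -ld /run/runc
    # cri-o itself still lists the kube-system containers fine, as the log above shows
    out/minikube-linux-arm64 -p addons-442328 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system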

                                                
                                    
TestAddons/parallel/MetricsServer (5.45s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.200361ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-swsbd" [d28f1e31-8b2f-42c5-b7a2-a1662d0c5412] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003689337s
addons_test.go:463: (dbg) Run:  kubectl --context addons-442328 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (324.987179ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 18:46:17.574228   12097 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:46:17.574365   12097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:17.574370   12097 out.go:374] Setting ErrFile to fd 2...
	I1006 18:46:17.574374   12097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:17.574650   12097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:46:17.574924   12097 mustload.go:65] Loading cluster: addons-442328
	I1006 18:46:17.575279   12097 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:17.575290   12097 addons.go:606] checking whether the cluster is paused
	I1006 18:46:17.575389   12097 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:17.575403   12097 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:46:17.575990   12097 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:46:17.605271   12097 ssh_runner.go:195] Run: systemctl --version
	I1006 18:46:17.605328   12097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:46:17.625140   12097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:46:17.731783   12097 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:46:17.731878   12097 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:46:17.771296   12097 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:46:17.771321   12097 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:46:17.771327   12097 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:46:17.771331   12097 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:46:17.771334   12097 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:46:17.771338   12097 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:46:17.771341   12097 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:46:17.771344   12097 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:46:17.771347   12097 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:46:17.771353   12097 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:46:17.771356   12097 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:46:17.771359   12097 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:46:17.771362   12097 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:46:17.771366   12097 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:46:17.771369   12097 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:46:17.771378   12097 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:46:17.771385   12097 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:46:17.771394   12097 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:46:17.771398   12097 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:46:17.771401   12097 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:46:17.771405   12097 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:46:17.771409   12097 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:46:17.771413   12097 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:46:17.771416   12097 cri.go:89] found id: ""
	I1006 18:46:17.771465   12097 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:46:17.814416   12097 out.go:203] 
	W1006 18:46:17.820320   12097 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:46:17.820393   12097 out.go:285] * 
	* 
	W1006 18:46:17.824829   12097 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:46:17.828693   12097 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-442328 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.45s)
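Note that metrics-server itself came up healthy in this test (the pod stabilized within about 5s and the kubectl top run is not reported as failing); only the trailing addon-disable step hit the paused check. A quick way to re-confirm the server is serving metrics independently of the addon machinery, using the same label the test waits on (a sketch, assuming the cluster is still up):

    kubectl --context addons-442328 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context addons-442328 top pods -n kube-system
    kubectl --context addons-442328 top nodes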

                                                
                                    
TestAddons/parallel/CSI (47.31s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1006 18:46:00.675056    4350 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1006 18:46:00.678733    4350 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1006 18:46:00.678758    4350 kapi.go:107] duration metric: took 3.719411ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.728897ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-442328 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-442328 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [79322a38-6201-4204-9763-08bfba8aa382] Pending
helpers_test.go:352: "task-pv-pod" [79322a38-6201-4204-9763-08bfba8aa382] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [79322a38-6201-4204-9763-08bfba8aa382] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003271851s
addons_test.go:572: (dbg) Run:  kubectl --context addons-442328 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-442328 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-442328 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-442328 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-442328 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-442328 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-442328 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b534812a-35c5-41b2-ab14-97ba7a613b28] Pending
helpers_test.go:352: "task-pv-pod-restore" [b534812a-35c5-41b2-ab14-97ba7a613b28] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b534812a-35c5-41b2-ab14-97ba7a613b28] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003973112s
addons_test.go:614: (dbg) Run:  kubectl --context addons-442328 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-442328 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-442328 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (261.71396ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 18:46:47.519018   12980 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:46:47.519230   12980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:47.519242   12980 out.go:374] Setting ErrFile to fd 2...
	I1006 18:46:47.519248   12980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:47.519501   12980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:46:47.519815   12980 mustload.go:65] Loading cluster: addons-442328
	I1006 18:46:47.520395   12980 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:47.520413   12980 addons.go:606] checking whether the cluster is paused
	I1006 18:46:47.520515   12980 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:47.520532   12980 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:46:47.521301   12980 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:46:47.542196   12980 ssh_runner.go:195] Run: systemctl --version
	I1006 18:46:47.542284   12980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:46:47.561562   12980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:46:47.658392   12980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:46:47.658558   12980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:46:47.686902   12980 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:46:47.686924   12980 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:46:47.686929   12980 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:46:47.686934   12980 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:46:47.686938   12980 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:46:47.686942   12980 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:46:47.686968   12980 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:46:47.686978   12980 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:46:47.686983   12980 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:46:47.686990   12980 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:46:47.687004   12980 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:46:47.687008   12980 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:46:47.687011   12980 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:46:47.687015   12980 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:46:47.687018   12980 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:46:47.687024   12980 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:46:47.687046   12980 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:46:47.687064   12980 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:46:47.687071   12980 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:46:47.687075   12980 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:46:47.687081   12980 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:46:47.687090   12980 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:46:47.687094   12980 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:46:47.687097   12980 cri.go:89] found id: ""
	I1006 18:46:47.687161   12980 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:46:47.703237   12980 out.go:203] 
	W1006 18:46:47.706178   12980 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:46:47.706203   12980 out.go:285] * 
	* 
	W1006 18:46:47.710026   12980 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:46:47.712983   12980 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-442328 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (263.751794ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 18:46:47.772275   13023 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:46:47.772523   13023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:47.772573   13023 out.go:374] Setting ErrFile to fd 2...
	I1006 18:46:47.772594   13023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:47.772994   13023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:46:47.773528   13023 mustload.go:65] Loading cluster: addons-442328
	I1006 18:46:47.774338   13023 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:47.774399   13023 addons.go:606] checking whether the cluster is paused
	I1006 18:46:47.774637   13023 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:47.774694   13023 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:46:47.775903   13023 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:46:47.799035   13023 ssh_runner.go:195] Run: systemctl --version
	I1006 18:46:47.799086   13023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:46:47.818833   13023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:46:47.914622   13023 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:46:47.914711   13023 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:46:47.950878   13023 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:46:47.950903   13023 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:46:47.950908   13023 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:46:47.950912   13023 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:46:47.950916   13023 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:46:47.950919   13023 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:46:47.950922   13023 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:46:47.950925   13023 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:46:47.950929   13023 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:46:47.950935   13023 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:46:47.950938   13023 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:46:47.950942   13023 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:46:47.950945   13023 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:46:47.950949   13023 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:46:47.950952   13023 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:46:47.950959   13023 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:46:47.950966   13023 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:46:47.950970   13023 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:46:47.950973   13023 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:46:47.950976   13023 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:46:47.950981   13023 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:46:47.950992   13023 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:46:47.950995   13023 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:46:47.950998   13023 cri.go:89] found id: ""
	I1006 18:46:47.951049   13023 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:46:47.966737   13023 out.go:203] 
	W1006 18:46:47.969920   13023 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:46:47.969951   13023 out.go:285] * 
	* 
	W1006 18:46:47.973800   13023 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:46:47.976803   13023 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-442328 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (47.31s)
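The CSI hostpath flow itself completed end to end: hpvc bound, task-pv-pod ran, the new-snapshot-demo snapshot became ready, and the hpvc-restore / task-pv-pod-restore pair came up from it; only the two trailing addon-disable calls failed on the paused check. On a rerun, each stage can be checked with the same selectors and jsonpath expressions the test uses (a sketch; the test deletes these objects in its final kubectl steps, so they will not exist after a completed run):

    kubectl --context addons-442328 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
    kubectl --context addons-442328 get pvc hpvc -o jsonpath={.status.phase} -n default
    kubectl --context addons-442328 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default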

                                                
                                    
TestAddons/parallel/Headlamp (3.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-442328 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-442328 --alsologtostderr -v=1: exit status 11 (249.485687ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 18:45:57.309582   11220 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:45:57.309737   11220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:45:57.309748   11220 out.go:374] Setting ErrFile to fd 2...
	I1006 18:45:57.309754   11220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:45:57.310340   11220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:45:57.310647   11220 mustload.go:65] Loading cluster: addons-442328
	I1006 18:45:57.311004   11220 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:45:57.311021   11220 addons.go:606] checking whether the cluster is paused
	I1006 18:45:57.311123   11220 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:45:57.311142   11220 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:45:57.311657   11220 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:45:57.328540   11220 ssh_runner.go:195] Run: systemctl --version
	I1006 18:45:57.328593   11220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:45:57.346708   11220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:45:57.442273   11220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:45:57.442360   11220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:45:57.476717   11220 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:45:57.476736   11220 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:45:57.476740   11220 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:45:57.476759   11220 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:45:57.476764   11220 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:45:57.476767   11220 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:45:57.476771   11220 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:45:57.476774   11220 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:45:57.476777   11220 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:45:57.476785   11220 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:45:57.476789   11220 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:45:57.476792   11220 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:45:57.476795   11220 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:45:57.476798   11220 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:45:57.476801   11220 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:45:57.476806   11220 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:45:57.476816   11220 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:45:57.476821   11220 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:45:57.476823   11220 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:45:57.476826   11220 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:45:57.476830   11220 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:45:57.476839   11220 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:45:57.476842   11220 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:45:57.476844   11220 cri.go:89] found id: ""
	I1006 18:45:57.476897   11220 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:45:57.491962   11220 out.go:203] 
	W1006 18:45:57.494865   11220 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:45:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:45:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:45:57.494896   11220 out.go:285] * 
	* 
	W1006 18:45:57.498647   11220 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:45:57.501531   11220 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-442328 --alsologtostderr -v=1": exit status 11
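Here the paused check fires on the enable path (MK_ADDON_ENABLE_PAUSED), so the command exits before it gets as far as applying any headlamp manifests, and there is no headlamp workload left behind to inspect. A quick sanity check that nothing was deployed, using only generic commands with no headlamp-specific labels assumed:

    out/minikube-linux-arm64 -p addons-442328 addons list
    kubectl --context addons-442328 get pods -A | grep -i headlamp || echo "no headlamp pods"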
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-442328
helpers_test.go:243: (dbg) docker inspect addons-442328:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27",
	        "Created": "2025-10-06T18:43:15.291490596Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5496,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T18:43:15.326921135Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27/hostname",
	        "HostsPath": "/var/lib/docker/containers/8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27/hosts",
	        "LogPath": "/var/lib/docker/containers/8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27/8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27-json.log",
	        "Name": "/addons-442328",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-442328:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-442328",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27",
	                "LowerDir": "/var/lib/docker/overlay2/f8a87c57fa681dec37b192b095c869341820ceba06c5aaccd958b28f6010eb9c-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f8a87c57fa681dec37b192b095c869341820ceba06c5aaccd958b28f6010eb9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f8a87c57fa681dec37b192b095c869341820ceba06c5aaccd958b28f6010eb9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f8a87c57fa681dec37b192b095c869341820ceba06c5aaccd958b28f6010eb9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-442328",
	                "Source": "/var/lib/docker/volumes/addons-442328/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-442328",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-442328",
	                "name.minikube.sigs.k8s.io": "addons-442328",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5dedfdd0082a1a6a774e09d3db845f6e7f9ebdf4cff2de96d32aab0812c516f9",
	            "SandboxKey": "/var/run/docker/netns/5dedfdd0082a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-442328": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:88:24:be:c7:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fdf859d8abb0bd92a77b6e58cb3c4758719b7831e18c6319222c118bbb6e751f",
	                    "EndpointID": "a7ec9b4e92d6f099b561a3d1ead564eb71b2e42423303c12997b6d2c18b0d31a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-442328",
	                        "8c722e206d43"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
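Only a few fields of this dump matter to the harness, and they can be pulled directly with Go templates, exactly as the cli_runner calls in the logs above do; for example, the container state and the forwarded SSH port for the same container:

    docker container inspect addons-442328 --format={{.State.Status}}
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-442328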
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-442328 -n addons-442328
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-442328 logs -n 25: (1.45927677s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-612821 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-612821   │ jenkins │ v1.37.0 │ 06 Oct 25 18:41 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │ 06 Oct 25 18:42 UTC │
	│ delete  │ -p download-only-612821                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-612821   │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │ 06 Oct 25 18:42 UTC │
	│ start   │ -o=json --download-only -p download-only-652012 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-652012   │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │ 06 Oct 25 18:42 UTC │
	│ delete  │ -p download-only-652012                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-652012   │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │ 06 Oct 25 18:42 UTC │
	│ delete  │ -p download-only-612821                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-612821   │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │ 06 Oct 25 18:42 UTC │
	│ delete  │ -p download-only-652012                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-652012   │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │ 06 Oct 25 18:42 UTC │
	│ start   │ --download-only -p download-docker-993189 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-993189 │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │                     │
	│ delete  │ -p download-docker-993189                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-993189 │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │ 06 Oct 25 18:42 UTC │
	│ start   │ --download-only -p binary-mirror-895506 --alsologtostderr --binary-mirror http://127.0.0.1:38653 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-895506   │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │                     │
	│ delete  │ -p binary-mirror-895506                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-895506   │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │ 06 Oct 25 18:42 UTC │
	│ addons  │ disable dashboard -p addons-442328                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │                     │
	│ addons  │ enable dashboard -p addons-442328                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │                     │
	│ start   │ -p addons-442328 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │ 06 Oct 25 18:45 UTC │
	│ addons  │ addons-442328 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:45 UTC │                     │
	│ addons  │ addons-442328 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:45 UTC │                     │
	│ addons  │ enable headlamp -p addons-442328 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-442328          │ jenkins │ v1.37.0 │ 06 Oct 25 18:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 18:42:49
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 18:42:49.193168    5102 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:42:49.193282    5102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:42:49.193291    5102 out.go:374] Setting ErrFile to fd 2...
	I1006 18:42:49.193296    5102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:42:49.193564    5102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:42:49.194017    5102 out.go:368] Setting JSON to false
	I1006 18:42:49.194742    5102 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1505,"bootTime":1759774665,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 18:42:49.194805    5102 start.go:140] virtualization:  
	I1006 18:42:49.198135    5102 out.go:179] * [addons-442328] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 18:42:49.201801    5102 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 18:42:49.201911    5102 notify.go:220] Checking for updates...
	I1006 18:42:49.207479    5102 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 18:42:49.210372    5102 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 18:42:49.213143    5102 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 18:42:49.215990    5102 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 18:42:49.218803    5102 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 18:42:49.221915    5102 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 18:42:49.241626    5102 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 18:42:49.241753    5102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 18:42:49.307850    5102 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-06 18:42:49.298625415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 18:42:49.307964    5102 docker.go:318] overlay module found
	I1006 18:42:49.311012    5102 out.go:179] * Using the docker driver based on user configuration
	I1006 18:42:49.313754    5102 start.go:304] selected driver: docker
	I1006 18:42:49.313773    5102 start.go:924] validating driver "docker" against <nil>
	I1006 18:42:49.313786    5102 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 18:42:49.314514    5102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 18:42:49.368529    5102 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-06 18:42:49.35974875 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 18:42:49.368700    5102 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 18:42:49.368926    5102 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 18:42:49.371826    5102 out.go:179] * Using Docker driver with root privileges
	I1006 18:42:49.374533    5102 cni.go:84] Creating CNI manager for ""
	I1006 18:42:49.374596    5102 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 18:42:49.374609    5102 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 18:42:49.374685    5102 start.go:348] cluster config:
	{Name:addons-442328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-442328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 18:42:49.377779    5102 out.go:179] * Starting "addons-442328" primary control-plane node in "addons-442328" cluster
	I1006 18:42:49.380637    5102 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 18:42:49.383578    5102 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 18:42:49.386470    5102 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 18:42:49.386508    5102 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 18:42:49.386523    5102 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 18:42:49.386532    5102 cache.go:58] Caching tarball of preloaded images
	I1006 18:42:49.386616    5102 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 18:42:49.386626    5102 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 18:42:49.386957    5102 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/config.json ...
	I1006 18:42:49.386987    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/config.json: {Name:mkc263948d35758166b9227c0ae8aa20bda1f9b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:42:49.403843    5102 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 18:42:49.404006    5102 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1006 18:42:49.404032    5102 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1006 18:42:49.404037    5102 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1006 18:42:49.404044    5102 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1006 18:42:49.404049    5102 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1006 18:43:07.500560    5102 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1006 18:43:07.500606    5102 cache.go:232] Successfully downloaded all kic artifacts
	I1006 18:43:07.500650    5102 start.go:360] acquireMachinesLock for addons-442328: {Name:mk9b46ab2957a6d941347e6c3488c1e2b2f2ea3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 18:43:07.500763    5102 start.go:364] duration metric: took 90.013µs to acquireMachinesLock for "addons-442328"
	I1006 18:43:07.500792    5102 start.go:93] Provisioning new machine with config: &{Name:addons-442328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-442328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 18:43:07.500860    5102 start.go:125] createHost starting for "" (driver="docker")
	I1006 18:43:07.502741    5102 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1006 18:43:07.502957    5102 start.go:159] libmachine.API.Create for "addons-442328" (driver="docker")
	I1006 18:43:07.503002    5102 client.go:168] LocalClient.Create starting
	I1006 18:43:07.503115    5102 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem
	I1006 18:43:07.790167    5102 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem
	I1006 18:43:08.322291    5102 cli_runner.go:164] Run: docker network inspect addons-442328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 18:43:08.338122    5102 cli_runner.go:211] docker network inspect addons-442328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 18:43:08.338216    5102 network_create.go:284] running [docker network inspect addons-442328] to gather additional debugging logs...
	I1006 18:43:08.338237    5102 cli_runner.go:164] Run: docker network inspect addons-442328
	W1006 18:43:08.354780    5102 cli_runner.go:211] docker network inspect addons-442328 returned with exit code 1
	I1006 18:43:08.354812    5102 network_create.go:287] error running [docker network inspect addons-442328]: docker network inspect addons-442328: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-442328 not found
	I1006 18:43:08.354826    5102 network_create.go:289] output of [docker network inspect addons-442328]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-442328 not found
	
	** /stderr **
	I1006 18:43:08.354940    5102 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 18:43:08.371239    5102 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001946930}
	I1006 18:43:08.371280    5102 network_create.go:124] attempt to create docker network addons-442328 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 18:43:08.371335    5102 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-442328 addons-442328
	I1006 18:43:08.425026    5102 network_create.go:108] docker network addons-442328 192.168.49.0/24 created
	I1006 18:43:08.425062    5102 kic.go:121] calculated static IP "192.168.49.2" for the "addons-442328" container
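
The lines above show the driver picking a free private subnet (192.168.49.0/24), creating the bridge network, and then deriving the container's static IP one address above the gateway. A minimal Go sketch of that address arithmetic, using only the subnet shown in the log (illustrative, not minikube's own code):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Derive the gateway (.1) and the first client IP (.2) from the /24
	// subnet chosen above: 192.168.49.0/24 -> 192.168.49.1 / 192.168.49.2.
	_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	base := ipnet.IP.To4()
	gateway := net.IPv4(base[0], base[1], base[2], base[3]+1)
	staticIP := net.IPv4(base[0], base[1], base[2], base[3]+2)
	fmt.Println("gateway:", gateway, "container IP:", staticIP)
}
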
	I1006 18:43:08.425144    5102 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 18:43:08.439785    5102 cli_runner.go:164] Run: docker volume create addons-442328 --label name.minikube.sigs.k8s.io=addons-442328 --label created_by.minikube.sigs.k8s.io=true
	I1006 18:43:08.456708    5102 oci.go:103] Successfully created a docker volume addons-442328
	I1006 18:43:08.456801    5102 cli_runner.go:164] Run: docker run --rm --name addons-442328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-442328 --entrypoint /usr/bin/test -v addons-442328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 18:43:10.820901    5102 cli_runner.go:217] Completed: docker run --rm --name addons-442328-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-442328 --entrypoint /usr/bin/test -v addons-442328:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.364061654s)
	I1006 18:43:10.820928    5102 oci.go:107] Successfully prepared a docker volume addons-442328
	I1006 18:43:10.820963    5102 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 18:43:10.820980    5102 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 18:43:10.821036    5102 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-442328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 18:43:15.222819    5102 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-442328:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.401748315s)
	I1006 18:43:15.222849    5102 kic.go:203] duration metric: took 4.401866087s to extract preloaded images to volume ...
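
The two docker run invocations above load the preloaded image tarball into the addons-442328 volume by running tar inside a throwaway kicbase container. A hedged Go sketch of the same pattern via os/exec, with the preload path and image tag taken from the log (digest omitted for brevity; this is not the actual minikube implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Extract an lz4-compressed image tarball into a named Docker volume by
	// running tar inside a temporary container, as the log above does.
	preload := "/home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643"
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", preload+":/preloaded.tar:ro",
		"-v", "addons-442328:/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}
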
	W1006 18:43:15.223006    5102 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 18:43:15.223123    5102 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 18:43:15.273689    5102 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-442328 --name addons-442328 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-442328 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-442328 --network addons-442328 --ip 192.168.49.2 --volume addons-442328:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 18:43:15.592683    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Running}}
	I1006 18:43:15.619290    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:15.639100    5102 cli_runner.go:164] Run: docker exec addons-442328 stat /var/lib/dpkg/alternatives/iptables
	I1006 18:43:15.703188    5102 oci.go:144] the created container "addons-442328" has a running status.
	I1006 18:43:15.703214    5102 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa...
	I1006 18:43:16.761927    5102 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 18:43:16.782342    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:16.798890    5102 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 18:43:16.798915    5102 kic_runner.go:114] Args: [docker exec --privileged addons-442328 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 18:43:16.839046    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:16.857168    5102 machine.go:93] provisionDockerMachine start ...
	I1006 18:43:16.857270    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:16.873873    5102 main.go:141] libmachine: Using SSH client type: native
	I1006 18:43:16.874197    5102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1006 18:43:16.874213    5102 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 18:43:16.874791    5102 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57412->127.0.0.1:32768: read: connection reset by peer
	I1006 18:43:20.007192    5102 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-442328
	
	I1006 18:43:20.007215    5102 ubuntu.go:182] provisioning hostname "addons-442328"
	I1006 18:43:20.007288    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:20.035645    5102 main.go:141] libmachine: Using SSH client type: native
	I1006 18:43:20.035993    5102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1006 18:43:20.036016    5102 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-442328 && echo "addons-442328" | sudo tee /etc/hostname
	I1006 18:43:20.176974    5102 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-442328
	
	I1006 18:43:20.177054    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:20.195363    5102 main.go:141] libmachine: Using SSH client type: native
	I1006 18:43:20.195682    5102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1006 18:43:20.195733    5102 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-442328' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-442328/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-442328' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 18:43:20.327949    5102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 18:43:20.327978    5102 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 18:43:20.328000    5102 ubuntu.go:190] setting up certificates
	I1006 18:43:20.328009    5102 provision.go:84] configureAuth start
	I1006 18:43:20.328081    5102 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-442328
	I1006 18:43:20.345715    5102 provision.go:143] copyHostCerts
	I1006 18:43:20.345800    5102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 18:43:20.345922    5102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 18:43:20.346021    5102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 18:43:20.346071    5102 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.addons-442328 san=[127.0.0.1 192.168.49.2 addons-442328 localhost minikube]
	I1006 18:43:20.621280    5102 provision.go:177] copyRemoteCerts
	I1006 18:43:20.621347    5102 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 18:43:20.621387    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:20.638102    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:20.731529    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1006 18:43:20.748729    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 18:43:20.765667    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 18:43:20.783278    5102 provision.go:87] duration metric: took 455.244116ms to configureAuth
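
configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.49.2, the machine name, localhost, and minikube. A minimal, self-contained Go sketch of building such a SAN list with crypto/x509 (self-signed here for brevity; minikube actually signs it with the CA created earlier, so this is illustrative only):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SANs mirror the san=[...] list in the provision.go line above.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-442328"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-442328", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated %d-byte DER server certificate\n", len(der))
}
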
	I1006 18:43:20.783302    5102 ubuntu.go:206] setting minikube options for container-runtime
	I1006 18:43:20.783495    5102 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:43:20.783595    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:20.801662    5102 main.go:141] libmachine: Using SSH client type: native
	I1006 18:43:20.801962    5102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1006 18:43:20.801982    5102 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 18:43:21.042931    5102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 18:43:21.042955    5102 machine.go:96] duration metric: took 4.185759304s to provisionDockerMachine
	I1006 18:43:21.042965    5102 client.go:171] duration metric: took 13.539951773s to LocalClient.Create
	I1006 18:43:21.042979    5102 start.go:167] duration metric: took 13.540022667s to libmachine.API.Create "addons-442328"
	I1006 18:43:21.042985    5102 start.go:293] postStartSetup for "addons-442328" (driver="docker")
	I1006 18:43:21.042995    5102 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 18:43:21.043063    5102 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 18:43:21.043108    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:21.061975    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:21.160682    5102 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 18:43:21.164093    5102 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 18:43:21.164122    5102 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 18:43:21.164133    5102 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 18:43:21.164243    5102 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 18:43:21.164272    5102 start.go:296] duration metric: took 121.280542ms for postStartSetup
	I1006 18:43:21.164599    5102 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-442328
	I1006 18:43:21.183981    5102 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/config.json ...
	I1006 18:43:21.184289    5102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 18:43:21.184339    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:21.201326    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:21.292655    5102 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 18:43:21.297268    5102 start.go:128] duration metric: took 13.796393502s to createHost
	I1006 18:43:21.297295    5102 start.go:83] releasing machines lock for "addons-442328", held for 13.796517805s
	I1006 18:43:21.297371    5102 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-442328
	I1006 18:43:21.318256    5102 ssh_runner.go:195] Run: cat /version.json
	I1006 18:43:21.318323    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:21.318600    5102 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 18:43:21.318667    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:21.344740    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:21.350698    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:21.529460    5102 ssh_runner.go:195] Run: systemctl --version
	I1006 18:43:21.535758    5102 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 18:43:21.570497    5102 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 18:43:21.574753    5102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 18:43:21.574821    5102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 18:43:21.603028    5102 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 18:43:21.603109    5102 start.go:495] detecting cgroup driver to use...
	I1006 18:43:21.603153    5102 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 18:43:21.603230    5102 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 18:43:21.620119    5102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 18:43:21.633155    5102 docker.go:218] disabling cri-docker service (if available) ...
	I1006 18:43:21.633216    5102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 18:43:21.650420    5102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 18:43:21.668847    5102 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 18:43:21.780099    5102 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 18:43:21.892978    5102 docker.go:234] disabling docker service ...
	I1006 18:43:21.893044    5102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 18:43:21.913413    5102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 18:43:21.926298    5102 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 18:43:22.035873    5102 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 18:43:22.168121    5102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 18:43:22.186851    5102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 18:43:22.203194    5102 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 18:43:22.203311    5102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:43:22.212806    5102 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 18:43:22.212931    5102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:43:22.222253    5102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:43:22.231100    5102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:43:22.240003    5102 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 18:43:22.248216    5102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:43:22.257667    5102 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:43:22.271036    5102 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
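
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch cgroup_manager to cgroupfs, set conmon_cgroup to pod, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A rough Go sketch of equivalent in-place substitutions (path taken from the log; the regex handling is simplified relative to the actual sed sequence):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log above
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// Pin the pause image and the cgroup driver, as the sed commands above do.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, "pause_image = \"registry.k8s.io/pause:3.10.1\"")
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	// Allow unprivileged low ports inside pods if no default_sysctls block exists.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(s) {
		s += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
}
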
	I1006 18:43:22.279875    5102 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 18:43:22.287288    5102 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1006 18:43:22.287350    5102 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1006 18:43:22.300209    5102 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 18:43:22.308036    5102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 18:43:22.424046    5102 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 18:43:22.550314    5102 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 18:43:22.550404    5102 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 18:43:22.554446    5102 start.go:563] Will wait 60s for crictl version
	I1006 18:43:22.554506    5102 ssh_runner.go:195] Run: which crictl
	I1006 18:43:22.558096    5102 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 18:43:22.586903    5102 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 18:43:22.587037    5102 ssh_runner.go:195] Run: crio --version
	I1006 18:43:22.614871    5102 ssh_runner.go:195] Run: crio --version
	I1006 18:43:22.648561    5102 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 18:43:22.651482    5102 cli_runner.go:164] Run: docker network inspect addons-442328 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 18:43:22.667826    5102 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 18:43:22.671674    5102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
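
The bash one-liner above makes the host.minikube.internal entry idempotent: strip any existing line for the name, append the new mapping, and copy the temp file back over /etc/hosts (the same idiom is used again below for control-plane.minikube.internal). A small Go sketch of that upsert, using the IP and hostname from the log (illustrative; the test run uses the shell command shown above):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any existing line ending in "\t<name>" and appends a fresh
// "<ip>\t<name>" mapping, mirroring the /etc/hosts edit in the log.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(upsertHost(string(data), "192.168.49.1", "host.minikube.internal"))
}
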
	I1006 18:43:22.681129    5102 kubeadm.go:883] updating cluster {Name:addons-442328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-442328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 18:43:22.681249    5102 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 18:43:22.681317    5102 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 18:43:22.711566    5102 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 18:43:22.711591    5102 crio.go:433] Images already preloaded, skipping extraction
	I1006 18:43:22.711650    5102 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 18:43:22.738118    5102 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 18:43:22.738144    5102 cache_images.go:85] Images are preloaded, skipping loading
	I1006 18:43:22.738152    5102 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1006 18:43:22.738235    5102 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-442328 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-442328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 18:43:22.738323    5102 ssh_runner.go:195] Run: crio config
	I1006 18:43:22.801492    5102 cni.go:84] Creating CNI manager for ""
	I1006 18:43:22.801522    5102 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 18:43:22.801554    5102 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 18:43:22.801599    5102 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-442328 NodeName:addons-442328 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 18:43:22.801765    5102 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-442328"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 18:43:22.801862    5102 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 18:43:22.809500    5102 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 18:43:22.809565    5102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 18:43:22.817199    5102 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1006 18:43:22.830052    5102 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 18:43:22.842977    5102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
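
The multi-document kubeadm config dumped above (kubeadm.go:195) is copied to /var/tmp/minikube/kubeadm.yaml.new as four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A minimal Go sketch of reading such a multi-document file and listing the kinds it contains follows; it is an illustration only, not minikube code, and assumes gopkg.in/yaml.v3 is available and that the file has already been promoted to /var/tmp/minikube/kubeadm.yaml as later log lines do.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path matches the kubeadm.yaml the log copies into place on the node.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.Decoder returns one document per Decode call, so the four
	// kubeadm/kubelet/kube-proxy documents separated by "---" come back
	// one at a time until io.EOF.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("apiVersion=%v kind=%v\n", doc["apiVersion"], doc["kind"])
	}
}
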
	I1006 18:43:22.855903    5102 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 18:43:22.859462    5102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
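
The bash one-liner above rewrites /etc/hosts so that exactly one entry maps control-plane.minikube.internal to 192.168.49.2: it drops any existing line for that name, appends the new mapping, and copies the temp file back with sudo. A rough Go equivalent of the same idempotent update is sketched below; it is illustrative only and writes the file directly rather than going through a temp file and sudo cp.

package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.49.2\t" + host

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}

	// Keep every line that does not already end with the control-plane name,
	// mirroring the `grep -v $'\tcontrol-plane.minikube.internal$'` above.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)

	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
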
	I1006 18:43:22.869332    5102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 18:43:22.982062    5102 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 18:43:22.997007    5102 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328 for IP: 192.168.49.2
	I1006 18:43:22.997026    5102 certs.go:195] generating shared ca certs ...
	I1006 18:43:22.997042    5102 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:22.997202    5102 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 18:43:23.368698    5102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt ...
	I1006 18:43:23.368731    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt: {Name:mka617cd9c96ec7552efe1c89ec4ced838347d5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:23.368949    5102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key ...
	I1006 18:43:23.368964    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key: {Name:mke1c1853952a570e1a6b7df9f26798abd52a483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:23.369052    5102 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 18:43:23.638628    5102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt ...
	I1006 18:43:23.638656    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt: {Name:mk21b0d3f2c3741323f78c6e5b90fd5edf1600c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:23.638828    5102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key ...
	I1006 18:43:23.638843    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key: {Name:mkc60242033f0c1489cf5efd77a0632df75dfd8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:23.638921    5102 certs.go:257] generating profile certs ...
	I1006 18:43:23.638977    5102 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.key
	I1006 18:43:23.638995    5102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt with IP's: []
	I1006 18:43:24.368839    5102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt ...
	I1006 18:43:24.368871    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: {Name:mk30bee72787735fc1483520ea973c848a6f59e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:24.369064    5102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.key ...
	I1006 18:43:24.369078    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.key: {Name:mk07650efea402c2b338a2dbdaa79ebf4302f8c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:24.369162    5102 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.key.6d1bc007
	I1006 18:43:24.369182    5102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.crt.6d1bc007 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1006 18:43:24.816514    5102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.crt.6d1bc007 ...
	I1006 18:43:24.816582    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.crt.6d1bc007: {Name:mke12e9a0fca58dd6de1a580e6ee3de06c1467e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:24.816760    5102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.key.6d1bc007 ...
	I1006 18:43:24.816774    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.key.6d1bc007: {Name:mk85db778aa25f6bed0146ae08bba8008aae1249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:24.816856    5102 certs.go:382] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.crt.6d1bc007 -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.crt
	I1006 18:43:24.816945    5102 certs.go:386] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.key.6d1bc007 -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.key
	I1006 18:43:24.817001    5102 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.key
	I1006 18:43:24.817016    5102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.crt with IP's: []
	I1006 18:43:25.492664    5102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.crt ...
	I1006 18:43:25.492692    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.crt: {Name:mk95252de734ce611d9878c6be63fb0c316d5a59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:25.492883    5102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.key ...
	I1006 18:43:25.492896    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.key: {Name:mk2e81d3db589c57f42c7532f610f5b21bf55a13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:25.493088    5102 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 18:43:25.493137    5102 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 18:43:25.493170    5102 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 18:43:25.493197    5102 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
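
The certs.go/crypto.go entries above cover generating the shared minikubeCA and proxyClientCA pair and the per-profile client, apiserver and aggregator certificates under the .minikube directory. The sketch below shows the general shape of creating a self-signed CA with Go's crypto/x509 and writing it out as ca.crt/ca.key; it is a simplified illustration, not minikube's actual implementation, and the key size, subject and expiry chosen here are assumptions for the example.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate an RSA key for the CA.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Self-signed CA certificate template, loosely mirroring the
	// "minikubeCA" certificate named in the log (values are illustrative).
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	// Write ca.crt and ca.key in PEM form, as the crypto.go lines above do
	// for the real files under .minikube/.
	_ = os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("ca.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}
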
	I1006 18:43:25.493782    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 18:43:25.512945    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 18:43:25.530839    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 18:43:25.548015    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 18:43:25.565350    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1006 18:43:25.582383    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 18:43:25.599050    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 18:43:25.616410    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 18:43:25.633030    5102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 18:43:25.650428    5102 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 18:43:25.662953    5102 ssh_runner.go:195] Run: openssl version
	I1006 18:43:25.669405    5102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 18:43:25.677483    5102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 18:43:25.680946    5102 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 18:43:25.681014    5102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 18:43:25.721712    5102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 18:43:25.729644    5102 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 18:43:25.732884    5102 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 18:43:25.732971    5102 kubeadm.go:400] StartCluster: {Name:addons-442328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-442328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 18:43:25.733054    5102 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:43:25.733117    5102 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:43:25.758993    5102 cri.go:89] found id: ""
	I1006 18:43:25.759067    5102 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 18:43:25.766606    5102 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 18:43:25.774282    5102 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 18:43:25.774370    5102 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 18:43:25.781909    5102 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 18:43:25.781929    5102 kubeadm.go:157] found existing configuration files:
	
	I1006 18:43:25.781978    5102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 18:43:25.789728    5102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 18:43:25.789816    5102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 18:43:25.797046    5102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 18:43:25.804925    5102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 18:43:25.804991    5102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 18:43:25.812277    5102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 18:43:25.819865    5102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 18:43:25.819969    5102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 18:43:25.827177    5102 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 18:43:25.834703    5102 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 18:43:25.834788    5102 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 18:43:25.842070    5102 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 18:43:25.880030    5102 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 18:43:25.880333    5102 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 18:43:25.906326    5102 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 18:43:25.906400    5102 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 18:43:25.906443    5102 kubeadm.go:318] OS: Linux
	I1006 18:43:25.906495    5102 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 18:43:25.906553    5102 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 18:43:25.906608    5102 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 18:43:25.906662    5102 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 18:43:25.906717    5102 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 18:43:25.906769    5102 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 18:43:25.906820    5102 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 18:43:25.906874    5102 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 18:43:25.906926    5102 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 18:43:25.976304    5102 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 18:43:25.976421    5102 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 18:43:25.976530    5102 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 18:43:25.988267    5102 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 18:43:25.992666    5102 out.go:252]   - Generating certificates and keys ...
	I1006 18:43:25.992767    5102 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 18:43:25.992838    5102 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 18:43:26.215189    5102 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 18:43:26.500940    5102 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 18:43:26.771480    5102 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 18:43:26.934275    5102 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 18:43:27.092096    5102 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 18:43:27.092408    5102 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-442328 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 18:43:27.985084    5102 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 18:43:27.985285    5102 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-442328 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 18:43:28.126426    5102 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 18:43:28.547748    5102 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 18:43:28.948390    5102 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 18:43:28.948791    5102 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 18:43:29.062838    5102 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 18:43:29.801180    5102 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 18:43:30.310452    5102 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 18:43:30.621542    5102 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 18:43:30.906024    5102 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 18:43:30.906611    5102 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 18:43:30.909487    5102 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 18:43:30.912808    5102 out.go:252]   - Booting up control plane ...
	I1006 18:43:30.912913    5102 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 18:43:30.912999    5102 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 18:43:30.913708    5102 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 18:43:30.928792    5102 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 18:43:30.929132    5102 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 18:43:30.937826    5102 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 18:43:30.937933    5102 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 18:43:30.937998    5102 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 18:43:31.066546    5102 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 18:43:31.066692    5102 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 18:43:33.068057    5102 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001611919s
	I1006 18:43:33.071369    5102 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 18:43:33.071468    5102 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 18:43:33.071865    5102 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 18:43:33.071957    5102 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 18:43:36.139955    5102 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.06796515s
	I1006 18:43:37.385194    5102 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.313824885s
	I1006 18:43:39.073895    5102 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002266281s
	I1006 18:43:39.093589    5102 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 18:43:39.109863    5102 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 18:43:39.127400    5102 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 18:43:39.127653    5102 kubeadm.go:318] [mark-control-plane] Marking the node addons-442328 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 18:43:39.139134    5102 kubeadm.go:318] [bootstrap-token] Using token: b915w2.j1hxlaumrltogjrr
	I1006 18:43:39.142199    5102 out.go:252]   - Configuring RBAC rules ...
	I1006 18:43:39.142342    5102 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 18:43:39.146669    5102 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 18:43:39.154447    5102 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 18:43:39.160608    5102 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 18:43:39.164753    5102 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 18:43:39.168766    5102 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 18:43:39.481302    5102 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 18:43:39.925083    5102 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1006 18:43:40.482597    5102 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1006 18:43:40.483691    5102 kubeadm.go:318] 
	I1006 18:43:40.483796    5102 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1006 18:43:40.483802    5102 kubeadm.go:318] 
	I1006 18:43:40.483879    5102 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1006 18:43:40.483884    5102 kubeadm.go:318] 
	I1006 18:43:40.483910    5102 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1006 18:43:40.483968    5102 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 18:43:40.484017    5102 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 18:43:40.484022    5102 kubeadm.go:318] 
	I1006 18:43:40.484075    5102 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1006 18:43:40.484079    5102 kubeadm.go:318] 
	I1006 18:43:40.484126    5102 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 18:43:40.484130    5102 kubeadm.go:318] 
	I1006 18:43:40.484182    5102 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1006 18:43:40.484256    5102 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 18:43:40.484323    5102 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 18:43:40.484328    5102 kubeadm.go:318] 
	I1006 18:43:40.484412    5102 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 18:43:40.484518    5102 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1006 18:43:40.484524    5102 kubeadm.go:318] 
	I1006 18:43:40.484607    5102 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token b915w2.j1hxlaumrltogjrr \
	I1006 18:43:40.484716    5102 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd \
	I1006 18:43:40.484737    5102 kubeadm.go:318] 	--control-plane 
	I1006 18:43:40.484742    5102 kubeadm.go:318] 
	I1006 18:43:40.484831    5102 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1006 18:43:40.484836    5102 kubeadm.go:318] 
	I1006 18:43:40.484917    5102 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token b915w2.j1hxlaumrltogjrr \
	I1006 18:43:40.485025    5102 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd 
	I1006 18:43:40.488571    5102 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 18:43:40.488823    5102 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 18:43:40.488936    5102 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 18:43:40.488955    5102 cni.go:84] Creating CNI manager for ""
	I1006 18:43:40.488963    5102 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 18:43:40.491941    5102 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1006 18:43:40.494826    5102 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 18:43:40.498949    5102 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1006 18:43:40.498972    5102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1006 18:43:40.512740    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 18:43:40.782762    5102 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 18:43:40.782854    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:40.782907    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-442328 minikube.k8s.io/updated_at=2025_10_06T18_43_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81 minikube.k8s.io/name=addons-442328 minikube.k8s.io/primary=true
	I1006 18:43:40.922860    5102 ops.go:34] apiserver oom_adj: -16
	I1006 18:43:40.935117    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:41.435222    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:41.936116    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:42.435186    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:42.935225    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:43.435210    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:43.935200    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:44.435820    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:44.935133    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:45.435395    5102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 18:43:45.589482    5102 kubeadm.go:1113] duration metric: took 4.806676019s to wait for elevateKubeSystemPrivileges
	I1006 18:43:45.589517    5102 kubeadm.go:402] duration metric: took 19.856550715s to StartCluster
	I1006 18:43:45.589534    5102 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:45.589656    5102 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 18:43:45.590081    5102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:43:45.590268    5102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 18:43:45.590301    5102 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 18:43:45.590513    5102 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:43:45.590556    5102 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1006 18:43:45.590660    5102 addons.go:69] Setting yakd=true in profile "addons-442328"
	I1006 18:43:45.590686    5102 addons.go:238] Setting addon yakd=true in "addons-442328"
	I1006 18:43:45.590712    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.591158    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.591340    5102 addons.go:69] Setting inspektor-gadget=true in profile "addons-442328"
	I1006 18:43:45.591365    5102 addons.go:238] Setting addon inspektor-gadget=true in "addons-442328"
	I1006 18:43:45.591401    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.591772    5102 addons.go:69] Setting metrics-server=true in profile "addons-442328"
	I1006 18:43:45.591790    5102 addons.go:238] Setting addon metrics-server=true in "addons-442328"
	I1006 18:43:45.591808    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.592182    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.592638    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.595080    5102 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-442328"
	I1006 18:43:45.595148    5102 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-442328"
	I1006 18:43:45.595198    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.597619    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.598738    5102 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-442328"
	I1006 18:43:45.598769    5102 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-442328"
	I1006 18:43:45.598808    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.599270    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.599735    5102 addons.go:69] Setting registry=true in profile "addons-442328"
	I1006 18:43:45.599755    5102 addons.go:238] Setting addon registry=true in "addons-442328"
	I1006 18:43:45.599785    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.600182    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.607319    5102 addons.go:69] Setting cloud-spanner=true in profile "addons-442328"
	I1006 18:43:45.607337    5102 addons.go:69] Setting registry-creds=true in profile "addons-442328"
	I1006 18:43:45.607365    5102 addons.go:238] Setting addon registry-creds=true in "addons-442328"
	I1006 18:43:45.607368    5102 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-442328"
	I1006 18:43:45.607396    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.607413    5102 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-442328"
	I1006 18:43:45.607435    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.607880    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.607993    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625577    5102 addons.go:69] Setting storage-provisioner=true in profile "addons-442328"
	I1006 18:43:45.638712    5102 addons.go:238] Setting addon storage-provisioner=true in "addons-442328"
	I1006 18:43:45.638814    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.639450    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625621    5102 addons.go:69] Setting default-storageclass=true in profile "addons-442328"
	I1006 18:43:45.644299    5102 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-442328"
	I1006 18:43:45.646354    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625628    5102 addons.go:69] Setting gcp-auth=true in profile "addons-442328"
	I1006 18:43:45.657421    5102 mustload.go:65] Loading cluster: addons-442328
	I1006 18:43:45.657643    5102 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:43:45.657929    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625639    5102 addons.go:69] Setting ingress=true in profile "addons-442328"
	I1006 18:43:45.663266    5102 addons.go:238] Setting addon ingress=true in "addons-442328"
	I1006 18:43:45.663363    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.664323    5102 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1006 18:43:45.665684    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.667295    5102 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1006 18:43:45.667325    5102 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1006 18:43:45.667383    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.625735    5102 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-442328"
	I1006 18:43:45.679872    5102 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-442328"
	I1006 18:43:45.680213    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625645    5102 addons.go:69] Setting ingress-dns=true in profile "addons-442328"
	I1006 18:43:45.680376    5102 addons.go:238] Setting addon ingress-dns=true in "addons-442328"
	I1006 18:43:45.680409    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.680794    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625761    5102 addons.go:69] Setting volcano=true in profile "addons-442328"
	I1006 18:43:45.697338    5102 addons.go:238] Setting addon volcano=true in "addons-442328"
	I1006 18:43:45.697379    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.697838    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625768    5102 addons.go:69] Setting volumesnapshots=true in profile "addons-442328"
	I1006 18:43:45.700401    5102 addons.go:238] Setting addon volumesnapshots=true in "addons-442328"
	I1006 18:43:45.700445    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.700906    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.607351    5102 addons.go:238] Setting addon cloud-spanner=true in "addons-442328"
	I1006 18:43:45.717185    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.717655    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.625822    5102 out.go:179] * Verifying Kubernetes components...
	I1006 18:43:45.725706    5102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 18:43:45.776505    5102 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1006 18:43:45.784951    5102 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1006 18:43:45.785450    5102 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1006 18:43:45.785726    5102 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 18:43:45.785814    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1006 18:43:45.785906    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.830087    5102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
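
The pipeline above fetches the coredns ConfigMap, uses sed to insert a hosts block (mapping 192.168.49.1 to host.minikube.internal, with fallthrough) before the forward plugin and a log directive before errors, then replaces the ConfigMap. A small Go sketch of the same text transformation on a Corefile string follows, purely to make the sed expressions easier to read; it does not talk to the API server, and the sample Corefile in main is a shortened illustrative snippet.

package main

import (
	"fmt"
	"strings"
)

// addMinikubeHosts mirrors the sed pipeline in the log: before the line that
// starts the forward plugin it inserts a hosts block for host.minikube.internal,
// and before the errors plugin it inserts a log directive.
func addMinikubeHosts(corefile string) string {
	hostsBlock := "        hosts {\n" +
		"           192.168.49.1 host.minikube.internal\n" +
		"           fallthrough\n" +
		"        }\n"

	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		trimmed := strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(trimmed, "forward . /etc/resolv.conf"):
			out.WriteString(hostsBlock)
		case trimmed == "errors":
			out.WriteString("        log\n")
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(addMinikubeHosts(corefile))
}
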
	I1006 18:43:45.843329    5102 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1006 18:43:45.843353    5102 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1006 18:43:45.843421    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.847816    5102 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1006 18:43:45.848374    5102 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1006 18:43:45.848393    5102 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1006 18:43:45.848463    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.863745    5102 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1006 18:43:45.871277    5102 out.go:179]   - Using image docker.io/registry:3.0.0
	I1006 18:43:45.875875    5102 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1006 18:43:45.875902    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1006 18:43:45.875975    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.889559    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.892932    5102 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1006 18:43:45.893003    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1006 18:43:45.893090    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.905771    5102 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1006 18:43:45.910310    5102 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1006 18:43:45.910335    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1006 18:43:45.910406    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.911362    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1006 18:43:45.914956    5102 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1006 18:43:45.918432    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1006 18:43:45.919463    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1006 18:43:45.920390    5102 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-442328"
	I1006 18:43:45.920429    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.920870    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.932600    5102 addons.go:238] Setting addon default-storageclass=true in "addons-442328"
	I1006 18:43:45.932641    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:45.933060    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:45.939919    5102 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1006 18:43:45.939946    5102 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1006 18:43:45.940017    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.940818    5102 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1006 18:43:45.940837    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1006 18:43:45.940902    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.956224    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:45.963918    5102 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1006 18:43:45.964139    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	W1006 18:43:45.965331    5102 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1006 18:43:45.965554    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:45.969284    5102 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1006 18:43:45.976997    5102 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 18:43:45.977032    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1006 18:43:45.977110    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:45.978579    5102 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 18:43:45.981851    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1006 18:43:45.986989    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1006 18:43:45.987114    5102 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 18:43:45.987203    5102 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 18:43:45.987495    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 18:43:45.987607    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:46.029745    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1006 18:43:46.035409    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1006 18:43:46.036172    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.038711    5102 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 18:43:46.042205    5102 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 18:43:46.042234    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1006 18:43:46.042304    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:46.043797    5102 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1006 18:43:46.055879    5102 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1006 18:43:46.055921    5102 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1006 18:43:46.055993    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:46.093911    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.100684    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.103863    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.119498    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.148062    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.173201    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.195766    5102 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1006 18:43:46.199445    5102 out.go:179]   - Using image docker.io/busybox:stable
	I1006 18:43:46.203531    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.204635    5102 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 18:43:46.204653    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1006 18:43:46.204708    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:46.209785    5102 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 18:43:46.209806    5102 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 18:43:46.209870    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:46.228787    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.234246    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.235113    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	W1006 18:43:46.236087    5102 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 18:43:46.236131    5102 retry.go:31] will retry after 148.861145ms: ssh: handshake failed: EOF
	W1006 18:43:46.236411    5102 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 18:43:46.236429    5102 retry.go:31] will retry after 261.10572ms: ssh: handshake failed: EOF
	I1006 18:43:46.276141    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:46.285465    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	W1006 18:43:46.286473    5102 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 18:43:46.286493    5102 retry.go:31] will retry after 156.887178ms: ssh: handshake failed: EOF
	I1006 18:43:46.358051    5102 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1006 18:43:46.502629    5102 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 18:43:46.502668    5102 retry.go:31] will retry after 375.579891ms: ssh: handshake failed: EOF
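The retry.go lines above show a retry-with-backoff loop around the SSH dial: each handshake EOF is logged and retried after a short, growing, jittered delay. A minimal sketch of that pattern in Go (illustrative only; the attempt cap and delay formula here are assumptions, not minikube's actual code):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn up to maxAttempts times, sleeping a jittered,
	// growing delay between attempts, mirroring the "will retry after Xms"
	// lines in the log above.
	func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			// Grow the delay each attempt and add jitter, as the varying
			// retry intervals in the log suggest.
			delay := base*time.Duration(attempt+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return fmt.Errorf("after %d attempts: %w", maxAttempts, err)
	}

	func main() {
		calls := 0
		err := retryWithBackoff(5, 150*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("ssh: handshake failed: EOF")
			}
			return nil
		})
		fmt.Println("result:", err)
	}
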
	I1006 18:43:46.659450    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1006 18:43:46.679187    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 18:43:46.724568    5102 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1006 18:43:46.724639    5102 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1006 18:43:46.726584    5102 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1006 18:43:46.726637    5102 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1006 18:43:46.733146    5102 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1006 18:43:46.733216    5102 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1006 18:43:46.761747    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1006 18:43:46.762612    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1006 18:43:46.778084    5102 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:43:46.778156    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1006 18:43:46.862817    5102 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1006 18:43:46.862890    5102 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1006 18:43:46.877049    5102 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1006 18:43:46.877119    5102 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1006 18:43:46.912718    5102 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1006 18:43:46.912787    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1006 18:43:46.944858    5102 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1006 18:43:46.944943    5102 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1006 18:43:46.956437    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 18:43:46.990310    5102 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1006 18:43:46.990385    5102 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1006 18:43:46.993228    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 18:43:47.041050    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:43:47.096645    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 18:43:47.105553    5102 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1006 18:43:47.105573    5102 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1006 18:43:47.145475    5102 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1006 18:43:47.145500    5102 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1006 18:43:47.150482    5102 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1006 18:43:47.150510    5102 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1006 18:43:47.156553    5102 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1006 18:43:47.156577    5102 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1006 18:43:47.239441    5102 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1006 18:43:47.239464    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1006 18:43:47.249813    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 18:43:47.280593    5102 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.450403399s)
	I1006 18:43:47.280623    5102 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1006 18:43:47.282121    5102 node_ready.go:35] waiting up to 6m0s for node "addons-442328" to be "Ready" ...
	I1006 18:43:47.342227    5102 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1006 18:43:47.342253    5102 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1006 18:43:47.369128    5102 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1006 18:43:47.369153    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1006 18:43:47.376864    5102 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 18:43:47.376899    5102 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1006 18:43:47.447119    5102 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1006 18:43:47.447147    5102 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1006 18:43:47.523772    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 18:43:47.553042    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1006 18:43:47.594675    5102 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 18:43:47.594699    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1006 18:43:47.611603    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 18:43:47.651355    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1006 18:43:47.721574    5102 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1006 18:43:47.721599    5102 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1006 18:43:47.786283    5102 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-442328" context rescaled to 1 replicas
	I1006 18:43:47.870408    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.210864872s)
	I1006 18:43:47.906707    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 18:43:47.996799    5102 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1006 18:43:47.996866    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1006 18:43:48.222958    5102 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1006 18:43:48.223028    5102 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1006 18:43:48.259438    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.580181066s)
	I1006 18:43:48.412528    5102 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1006 18:43:48.412552    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1006 18:43:48.597772    5102 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1006 18:43:48.597797    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1006 18:43:48.612755    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.850072685s)
	I1006 18:43:48.612945    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.85113011s)
	I1006 18:43:48.771628    5102 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 18:43:48.771658    5102 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1006 18:43:49.039812    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1006 18:43:49.296273    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:43:50.172198    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.215677968s)
	I1006 18:43:50.735834    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.742529919s)
	I1006 18:43:50.736026    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.694947149s)
	W1006 18:43:50.736055    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:50.736071    5102 retry.go:31] will retry after 345.519101ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
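The ig-crd.yaml failure above, and in the retries that follow, is a plain kubectl validation error: every document in an applied manifest must set apiVersion and kind, and the generated ig-crd.yaml evidently has neither, which is why even the later --force attempts fail the same way. A small, hypothetical pre-check that flags such documents before apply (assumes the gopkg.in/yaml.v3 module; the file path is illustrative):

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// checkManifest reports YAML documents that are missing apiVersion or kind,
	// which is exactly what kubectl's validator rejects in the log above.
	func checkManifest(data []byte) []string {
		var problems []string
		for i, doc := range strings.Split(string(data), "\n---") {
			if strings.TrimSpace(doc) == "" {
				continue
			}
			var obj map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
				problems = append(problems, fmt.Sprintf("doc %d: %v", i, err))
				continue
			}
			if obj["apiVersion"] == nil || obj["kind"] == nil {
				problems = append(problems, fmt.Sprintf("doc %d: apiVersion or kind not set", i))
			}
		}
		return problems
	}

	func main() {
		data, err := os.ReadFile("ig-crd.yaml") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range checkManifest(data) {
			fmt.Println(p)
		}
	}
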
	I1006 18:43:50.736088    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.639419083s)
	I1006 18:43:50.736117    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.486278777s)
	W1006 18:43:50.816157    5102 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
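The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: the StorageClass changed between the read and the write, so the update carrying a stale resourceVersion is rejected. The usual client-go remedy is to re-read and re-apply the mutation under retry.RetryOnConflict; a hedged sketch, assuming a kubeconfig-built clientset (the kubeconfig path is taken from the log, the rest is illustrative):

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// Re-read the StorageClass on every attempt so the update carries a
		// fresh resourceVersion; RetryOnConflict retries only on 409 Conflict.
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err
		})
		if err != nil {
			log.Fatal(err)
		}
	}
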
	I1006 18:43:51.082412    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1006 18:43:51.350766    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:43:52.121008    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.597206395s)
	I1006 18:43:52.121109    5102 addons.go:479] Verifying addon ingress=true in "addons-442328"
	I1006 18:43:52.121323    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.568250098s)
	I1006 18:43:52.121381    5102 addons.go:479] Verifying addon registry=true in "addons-442328"
	I1006 18:43:52.121723    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.47033587s)
	I1006 18:43:52.121847    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.510209141s)
	I1006 18:43:52.122446    5102 addons.go:479] Verifying addon metrics-server=true in "addons-442328"
	I1006 18:43:52.121873    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.215091427s)
	W1006 18:43:52.122492    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1006 18:43:52.122511    5102 retry.go:31] will retry after 329.20649ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1006 18:43:52.124693    5102 out.go:179] * Verifying ingress addon...
	I1006 18:43:52.126730    5102 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-442328 service yakd-dashboard -n yakd-dashboard
	
	I1006 18:43:52.126739    5102 out.go:179] * Verifying registry addon...
	I1006 18:43:52.129551    5102 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1006 18:43:52.130717    5102 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1006 18:43:52.158574    5102 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1006 18:43:52.158606    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:52.160154    5102 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1006 18:43:52.160182    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
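The kapi.go "waiting for pod ..." lines that follow are a poll over pods selected by label until they leave Pending. A minimal client-go sketch of the same loop (namespace, label selector, and the 6-minute bound come from the log; the polling helper itself is an illustrative assumption, not minikube's code):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodsRunning polls pods matching selector in ns until all report
	// Running, the same shape as the kapi.go wait loop in the log.
	func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			if len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Label and namespace as logged for the registry addon.
		if err := waitForPodsRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
	}
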
	I1006 18:43:52.452417    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 18:43:52.546928    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.507072008s)
	I1006 18:43:52.547043    5102 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-442328"
	I1006 18:43:52.547011    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.464562756s)
	W1006 18:43:52.547461    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:52.547577    5102 retry.go:31] will retry after 524.476926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:52.550068    5102 out.go:179] * Verifying csi-hostpath-driver addon...
	I1006 18:43:52.554595    5102 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1006 18:43:52.573613    5102 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1006 18:43:52.573647    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:52.674205    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:52.674773    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:53.058039    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:53.073207    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:43:53.159463    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:53.159884    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:53.502844    5102 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1006 18:43:53.503034    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:53.528429    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:43:53.558304    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:53.633393    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:53.638555    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:53.654598    5102 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1006 18:43:53.668795    5102 addons.go:238] Setting addon gcp-auth=true in "addons-442328"
	I1006 18:43:53.668896    5102 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:43:53.669403    5102 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:43:53.693710    5102 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1006 18:43:53.693762    5102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:43:53.730526    5102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	W1006 18:43:53.785419    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:43:54.058971    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:54.132903    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:54.138765    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:54.557979    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:54.632811    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:54.638232    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:55.060157    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:55.132976    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:55.138610    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:55.157919    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.705448821s)
	I1006 18:43:55.157957    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.084724366s)
	I1006 18:43:55.158001    5102 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.46426913s)
	W1006 18:43:55.158136    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:55.158157    5102 retry.go:31] will retry after 626.362992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:55.160989    5102 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 18:43:55.163858    5102 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1006 18:43:55.166808    5102 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1006 18:43:55.166836    5102 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1006 18:43:55.180220    5102 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1006 18:43:55.180244    5102 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1006 18:43:55.194495    5102 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 18:43:55.194520    5102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1006 18:43:55.207926    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 18:43:55.564440    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:55.642519    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:55.646664    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:55.702588    5102 addons.go:479] Verifying addon gcp-auth=true in "addons-442328"
	I1006 18:43:55.705698    5102 out.go:179] * Verifying gcp-auth addon...
	I1006 18:43:55.709239    5102 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1006 18:43:55.712125    5102 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1006 18:43:55.712189    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:43:55.784959    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1006 18:43:55.786009    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:43:56.058587    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:56.134246    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:56.139102    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:56.213095    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:43:56.559001    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1006 18:43:56.606665    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:56.606743    5102 retry.go:31] will retry after 832.446037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:56.632606    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:56.639370    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:56.711889    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:43:57.057901    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:57.132851    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:57.138689    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:57.212349    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:43:57.439896    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:43:57.558327    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:57.634359    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:57.640073    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:57.712809    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:43:57.786506    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:43:58.059590    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:58.133444    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:58.140266    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:58.212469    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:43:58.247228    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:58.247256    5102 retry.go:31] will retry after 1.041225751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:43:58.558097    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:58.633090    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:58.638838    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:58.712915    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:43:59.058924    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:59.133295    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:59.138962    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:59.213036    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:43:59.288937    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:43:59.558272    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:43:59.634190    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:43:59.640463    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:43:59.713209    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:43:59.792724    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:00.094355    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:00.142717    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:00.155815    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:00.213289    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:00.314746    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.025767769s)
	W1006 18:44:00.314862    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:00.315819    5102 retry.go:31] will retry after 2.820328663s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:00.559421    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:00.634838    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:00.659765    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:00.713210    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:01.058564    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:01.132409    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:01.139967    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:01.212941    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:01.557689    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:01.632750    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:01.639301    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:01.713303    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:02.058446    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:02.133661    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:02.139198    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:02.212893    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:02.285837    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:02.558007    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:02.633133    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:02.638653    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:02.712404    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:03.058274    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:03.133888    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:03.136968    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:44:03.140127    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:03.218525    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:03.558228    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:03.633157    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:03.638952    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:03.712753    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:03.961811    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:03.961848    5102 retry.go:31] will retry after 1.956913032s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:04.058602    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:04.133717    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:04.139302    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:04.211962    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:04.557683    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:04.632676    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:04.639430    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:04.712287    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:04.785663    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:05.058645    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:05.132772    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:05.139335    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:05.212949    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:05.557865    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:05.632668    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:05.639615    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:05.712442    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:05.918968    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:44:06.058439    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:06.133299    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:06.138822    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:06.212814    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:06.558474    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:06.633373    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:06.639302    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:06.712867    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:06.750581    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:06.750658    5102 retry.go:31] will retry after 4.301628283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 18:44:06.785775    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:07.057945    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:07.133163    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:07.138818    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:07.212856    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:07.557501    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:07.633187    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:07.638878    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:07.712654    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:08.058232    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:08.133196    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:08.139019    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:08.213402    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:08.559030    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:08.633130    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:08.638651    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:08.712530    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:08.790906    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:09.058260    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:09.133020    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:09.139130    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:09.212312    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:09.557890    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:09.632913    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:09.638569    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:09.712651    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:10.057908    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:10.133529    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:10.139035    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:10.213002    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:10.558239    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:10.637024    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:10.641948    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:10.712960    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:11.053226    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:44:11.063237    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:11.133428    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:11.139192    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:11.212951    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:11.285957    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:11.558142    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:11.632931    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:11.639300    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:11.713036    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:11.867012    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:11.867044    5102 retry.go:31] will retry after 4.682245078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:12.058169    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:12.133432    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:12.139622    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:12.212422    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:12.558069    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:12.633198    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:12.638660    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:12.712666    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:13.057956    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:13.132802    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:13.139554    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:13.212207    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:13.557849    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:13.632936    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:13.639452    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:13.712335    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:13.785887    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:14.058297    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:14.133154    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:14.138900    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:14.213286    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:14.558202    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:14.633222    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:14.638891    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:14.712696    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:15.058866    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:15.133232    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:15.139012    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:15.213083    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:15.558045    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:15.633240    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:15.639013    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:15.712968    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:15.786414    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:16.058136    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:16.132927    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:16.138497    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:16.212271    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:16.550423    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:44:16.557821    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:16.633466    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:16.639113    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:16.713021    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:17.058320    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:17.133687    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:17.139389    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:17.212773    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:17.381288    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:17.381320    5102 retry.go:31] will retry after 9.740361518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:17.558075    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:17.633000    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:17.638617    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:17.712543    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:18.058308    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:18.133652    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:18.139571    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:18.212455    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:18.285274    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:18.558548    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:18.633653    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:18.638900    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:18.712787    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:19.058323    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:19.132860    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:19.138549    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:19.212624    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:19.557404    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:19.633411    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:19.638924    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:19.712784    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:20.057766    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:20.132730    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:20.139507    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:20.212190    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:20.285887    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:20.558176    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:20.633170    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:20.638932    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:20.712947    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:21.058363    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:21.133107    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:21.139351    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:21.213121    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:21.558229    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:21.633233    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:21.639073    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:21.712923    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:22.057768    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:22.132785    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:22.139044    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:22.213162    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:22.559301    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:22.633672    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:22.639344    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:22.712100    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:22.785851    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:23.057949    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:23.132715    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:23.139372    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:23.213314    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:23.558378    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:23.633655    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:23.639150    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:23.713113    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:24.058411    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:24.133683    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:24.139250    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:24.211996    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:24.558047    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:24.633185    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:24.638866    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:24.712841    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 18:44:24.788842    5102 node_ready.go:57] node "addons-442328" has "Ready":"False" status (will retry)
	I1006 18:44:25.058082    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:25.133005    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:25.138846    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:25.213404    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:25.558023    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:25.633571    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:25.639028    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:25.712907    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:26.058710    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:26.132637    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:26.139313    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:26.212345    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:26.557840    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:26.632848    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:26.639370    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:26.750156    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:26.805706    5102 node_ready.go:49] node "addons-442328" is "Ready"
	I1006 18:44:26.805735    5102 node_ready.go:38] duration metric: took 39.52358282s for node "addons-442328" to be "Ready" ...
	I1006 18:44:26.805749    5102 api_server.go:52] waiting for apiserver process to appear ...
	I1006 18:44:26.805825    5102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 18:44:26.835913    5102 api_server.go:72] duration metric: took 41.245582728s to wait for apiserver process to appear ...
	I1006 18:44:26.835939    5102 api_server.go:88] waiting for apiserver healthz status ...
	I1006 18:44:26.835958    5102 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1006 18:44:26.854686    5102 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1006 18:44:26.856831    5102 api_server.go:141] control plane version: v1.34.1
	I1006 18:44:26.856862    5102 api_server.go:131] duration metric: took 20.915555ms to wait for apiserver health ...
	I1006 18:44:26.856871    5102 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 18:44:26.870158    5102 system_pods.go:59] 19 kube-system pods found
	I1006 18:44:26.870201    5102 system_pods.go:61] "coredns-66bc5c9577-bx5cf" [779ed2d3-fc88-4853-ac93-7a56e62a0190] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 18:44:26.870209    5102 system_pods.go:61] "csi-hostpath-attacher-0" [d762fc6a-25ef-424f-b865-914e712ed260] Pending
	I1006 18:44:26.870216    5102 system_pods.go:61] "csi-hostpath-resizer-0" [f5c6a57b-496a-40d3-a7da-cae95047a0db] Pending
	I1006 18:44:26.870222    5102 system_pods.go:61] "csi-hostpathplugin-g7kvd" [db381122-217d-4859-9d86-d7d29a692af5] Pending
	I1006 18:44:26.870226    5102 system_pods.go:61] "etcd-addons-442328" [8562fac7-6c63-42c3-98bb-4c599ba17505] Running
	I1006 18:44:26.870230    5102 system_pods.go:61] "kindnet-g2tkh" [1e0c713a-0110-4300-9fe7-8834abac0a34] Running
	I1006 18:44:26.870235    5102 system_pods.go:61] "kube-apiserver-addons-442328" [5b6af8a1-10f7-478e-b97d-27e19e5f2469] Running
	I1006 18:44:26.870239    5102 system_pods.go:61] "kube-controller-manager-addons-442328" [0d008802-4fa7-4ef6-81d3-f391d8b8488f] Running
	I1006 18:44:26.870244    5102 system_pods.go:61] "kube-ingress-dns-minikube" [737e5bad-1cf7-448f-acf2-93e38e152f16] Pending
	I1006 18:44:26.870251    5102 system_pods.go:61] "kube-proxy-n686b" [68644dfa-f7a5-42aa-938f-b928268e97f0] Running
	I1006 18:44:26.870256    5102 system_pods.go:61] "kube-scheduler-addons-442328" [fafbbc55-3ea9-4805-8bc2-0a365282ee86] Running
	I1006 18:44:26.870267    5102 system_pods.go:61] "metrics-server-85b7d694d7-swsbd" [d28f1e31-8b2f-42c5-b7a2-a1662d0c5412] Pending
	I1006 18:44:26.870271    5102 system_pods.go:61] "nvidia-device-plugin-daemonset-2ptdw" [9a959e36-17ef-49b2-8c50-36b639bbbbf7] Pending
	I1006 18:44:26.870275    5102 system_pods.go:61] "registry-66898fdd98-k4cb6" [0b43b208-864a-4999-a910-ca608e917e81] Pending
	I1006 18:44:26.870288    5102 system_pods.go:61] "registry-creds-764b6fb674-pgnhk" [7ae00ec6-941d-4df4-a11f-08481b31d714] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 18:44:26.870293    5102 system_pods.go:61] "registry-proxy-zqdsp" [b9743061-9fde-4f5c-aa4f-c77616b5dfa7] Pending
	I1006 18:44:26.870303    5102 system_pods.go:61] "snapshot-controller-7d9fbc56b8-glwdc" [1ac758b5-014d-4f6e-a68a-a653a4c90550] Pending
	I1006 18:44:26.870308    5102 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jg8hf" [b5a9bdde-a4d4-442a-9cee-f8cbc46520d2] Pending
	I1006 18:44:26.870312    5102 system_pods.go:61] "storage-provisioner" [63c4de3f-fd5a-4fb8-ba9f-4e238f7541f3] Pending
	I1006 18:44:26.870318    5102 system_pods.go:74] duration metric: took 13.441294ms to wait for pod list to return data ...
	I1006 18:44:26.870329    5102 default_sa.go:34] waiting for default service account to be created ...
	I1006 18:44:26.883957    5102 default_sa.go:45] found service account: "default"
	I1006 18:44:26.883984    5102 default_sa.go:55] duration metric: took 13.648168ms for default service account to be created ...
	I1006 18:44:26.883995    5102 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 18:44:26.908285    5102 system_pods.go:86] 19 kube-system pods found
	I1006 18:44:26.908318    5102 system_pods.go:89] "coredns-66bc5c9577-bx5cf" [779ed2d3-fc88-4853-ac93-7a56e62a0190] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 18:44:26.908326    5102 system_pods.go:89] "csi-hostpath-attacher-0" [d762fc6a-25ef-424f-b865-914e712ed260] Pending
	I1006 18:44:26.908331    5102 system_pods.go:89] "csi-hostpath-resizer-0" [f5c6a57b-496a-40d3-a7da-cae95047a0db] Pending
	I1006 18:44:26.908335    5102 system_pods.go:89] "csi-hostpathplugin-g7kvd" [db381122-217d-4859-9d86-d7d29a692af5] Pending
	I1006 18:44:26.908339    5102 system_pods.go:89] "etcd-addons-442328" [8562fac7-6c63-42c3-98bb-4c599ba17505] Running
	I1006 18:44:26.908344    5102 system_pods.go:89] "kindnet-g2tkh" [1e0c713a-0110-4300-9fe7-8834abac0a34] Running
	I1006 18:44:26.908348    5102 system_pods.go:89] "kube-apiserver-addons-442328" [5b6af8a1-10f7-478e-b97d-27e19e5f2469] Running
	I1006 18:44:26.908357    5102 system_pods.go:89] "kube-controller-manager-addons-442328" [0d008802-4fa7-4ef6-81d3-f391d8b8488f] Running
	I1006 18:44:26.908364    5102 system_pods.go:89] "kube-ingress-dns-minikube" [737e5bad-1cf7-448f-acf2-93e38e152f16] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 18:44:26.908379    5102 system_pods.go:89] "kube-proxy-n686b" [68644dfa-f7a5-42aa-938f-b928268e97f0] Running
	I1006 18:44:26.908383    5102 system_pods.go:89] "kube-scheduler-addons-442328" [fafbbc55-3ea9-4805-8bc2-0a365282ee86] Running
	I1006 18:44:26.908390    5102 system_pods.go:89] "metrics-server-85b7d694d7-swsbd" [d28f1e31-8b2f-42c5-b7a2-a1662d0c5412] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 18:44:26.908398    5102 system_pods.go:89] "nvidia-device-plugin-daemonset-2ptdw" [9a959e36-17ef-49b2-8c50-36b639bbbbf7] Pending
	I1006 18:44:26.908403    5102 system_pods.go:89] "registry-66898fdd98-k4cb6" [0b43b208-864a-4999-a910-ca608e917e81] Pending
	I1006 18:44:26.908409    5102 system_pods.go:89] "registry-creds-764b6fb674-pgnhk" [7ae00ec6-941d-4df4-a11f-08481b31d714] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 18:44:26.908425    5102 system_pods.go:89] "registry-proxy-zqdsp" [b9743061-9fde-4f5c-aa4f-c77616b5dfa7] Pending
	I1006 18:44:26.908432    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-glwdc" [1ac758b5-014d-4f6e-a68a-a653a4c90550] Pending
	I1006 18:44:26.908437    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jg8hf" [b5a9bdde-a4d4-442a-9cee-f8cbc46520d2] Pending
	I1006 18:44:26.908443    5102 system_pods.go:89] "storage-provisioner" [63c4de3f-fd5a-4fb8-ba9f-4e238f7541f3] Pending
	I1006 18:44:26.908457    5102 retry.go:31] will retry after 307.182962ms: missing components: kube-dns
	I1006 18:44:27.122670    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:44:27.122964    5102 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1006 18:44:27.122982    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:27.140681    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:27.145136    5102 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1006 18:44:27.145161    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:27.219499    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:27.225464    5102 system_pods.go:86] 19 kube-system pods found
	I1006 18:44:27.225505    5102 system_pods.go:89] "coredns-66bc5c9577-bx5cf" [779ed2d3-fc88-4853-ac93-7a56e62a0190] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 18:44:27.225518    5102 system_pods.go:89] "csi-hostpath-attacher-0" [d762fc6a-25ef-424f-b865-914e712ed260] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 18:44:27.225525    5102 system_pods.go:89] "csi-hostpath-resizer-0" [f5c6a57b-496a-40d3-a7da-cae95047a0db] Pending
	I1006 18:44:27.225539    5102 system_pods.go:89] "csi-hostpathplugin-g7kvd" [db381122-217d-4859-9d86-d7d29a692af5] Pending
	I1006 18:44:27.225543    5102 system_pods.go:89] "etcd-addons-442328" [8562fac7-6c63-42c3-98bb-4c599ba17505] Running
	I1006 18:44:27.225553    5102 system_pods.go:89] "kindnet-g2tkh" [1e0c713a-0110-4300-9fe7-8834abac0a34] Running
	I1006 18:44:27.225560    5102 system_pods.go:89] "kube-apiserver-addons-442328" [5b6af8a1-10f7-478e-b97d-27e19e5f2469] Running
	I1006 18:44:27.225565    5102 system_pods.go:89] "kube-controller-manager-addons-442328" [0d008802-4fa7-4ef6-81d3-f391d8b8488f] Running
	I1006 18:44:27.225575    5102 system_pods.go:89] "kube-ingress-dns-minikube" [737e5bad-1cf7-448f-acf2-93e38e152f16] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 18:44:27.225584    5102 system_pods.go:89] "kube-proxy-n686b" [68644dfa-f7a5-42aa-938f-b928268e97f0] Running
	I1006 18:44:27.225594    5102 system_pods.go:89] "kube-scheduler-addons-442328" [fafbbc55-3ea9-4805-8bc2-0a365282ee86] Running
	I1006 18:44:27.225601    5102 system_pods.go:89] "metrics-server-85b7d694d7-swsbd" [d28f1e31-8b2f-42c5-b7a2-a1662d0c5412] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 18:44:27.225613    5102 system_pods.go:89] "nvidia-device-plugin-daemonset-2ptdw" [9a959e36-17ef-49b2-8c50-36b639bbbbf7] Pending
	I1006 18:44:27.225624    5102 system_pods.go:89] "registry-66898fdd98-k4cb6" [0b43b208-864a-4999-a910-ca608e917e81] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 18:44:27.225634    5102 system_pods.go:89] "registry-creds-764b6fb674-pgnhk" [7ae00ec6-941d-4df4-a11f-08481b31d714] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 18:44:27.225639    5102 system_pods.go:89] "registry-proxy-zqdsp" [b9743061-9fde-4f5c-aa4f-c77616b5dfa7] Pending
	I1006 18:44:27.225650    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-glwdc" [1ac758b5-014d-4f6e-a68a-a653a4c90550] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 18:44:27.225661    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jg8hf" [b5a9bdde-a4d4-442a-9cee-f8cbc46520d2] Pending
	I1006 18:44:27.225665    5102 system_pods.go:89] "storage-provisioner" [63c4de3f-fd5a-4fb8-ba9f-4e238f7541f3] Pending
	I1006 18:44:27.225679    5102 retry.go:31] will retry after 390.029892ms: missing components: kube-dns
	I1006 18:44:27.566852    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:27.629912    5102 system_pods.go:86] 19 kube-system pods found
	I1006 18:44:27.629951    5102 system_pods.go:89] "coredns-66bc5c9577-bx5cf" [779ed2d3-fc88-4853-ac93-7a56e62a0190] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 18:44:27.629960    5102 system_pods.go:89] "csi-hostpath-attacher-0" [d762fc6a-25ef-424f-b865-914e712ed260] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 18:44:27.629967    5102 system_pods.go:89] "csi-hostpath-resizer-0" [f5c6a57b-496a-40d3-a7da-cae95047a0db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 18:44:27.629974    5102 system_pods.go:89] "csi-hostpathplugin-g7kvd" [db381122-217d-4859-9d86-d7d29a692af5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 18:44:27.629982    5102 system_pods.go:89] "etcd-addons-442328" [8562fac7-6c63-42c3-98bb-4c599ba17505] Running
	I1006 18:44:27.629998    5102 system_pods.go:89] "kindnet-g2tkh" [1e0c713a-0110-4300-9fe7-8834abac0a34] Running
	I1006 18:44:27.630003    5102 system_pods.go:89] "kube-apiserver-addons-442328" [5b6af8a1-10f7-478e-b97d-27e19e5f2469] Running
	I1006 18:44:27.630008    5102 system_pods.go:89] "kube-controller-manager-addons-442328" [0d008802-4fa7-4ef6-81d3-f391d8b8488f] Running
	I1006 18:44:27.630014    5102 system_pods.go:89] "kube-ingress-dns-minikube" [737e5bad-1cf7-448f-acf2-93e38e152f16] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 18:44:27.630018    5102 system_pods.go:89] "kube-proxy-n686b" [68644dfa-f7a5-42aa-938f-b928268e97f0] Running
	I1006 18:44:27.630029    5102 system_pods.go:89] "kube-scheduler-addons-442328" [fafbbc55-3ea9-4805-8bc2-0a365282ee86] Running
	I1006 18:44:27.630036    5102 system_pods.go:89] "metrics-server-85b7d694d7-swsbd" [d28f1e31-8b2f-42c5-b7a2-a1662d0c5412] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 18:44:27.630049    5102 system_pods.go:89] "nvidia-device-plugin-daemonset-2ptdw" [9a959e36-17ef-49b2-8c50-36b639bbbbf7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 18:44:27.630055    5102 system_pods.go:89] "registry-66898fdd98-k4cb6" [0b43b208-864a-4999-a910-ca608e917e81] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 18:44:27.630066    5102 system_pods.go:89] "registry-creds-764b6fb674-pgnhk" [7ae00ec6-941d-4df4-a11f-08481b31d714] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 18:44:27.630072    5102 system_pods.go:89] "registry-proxy-zqdsp" [b9743061-9fde-4f5c-aa4f-c77616b5dfa7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 18:44:27.630078    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-glwdc" [1ac758b5-014d-4f6e-a68a-a653a4c90550] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 18:44:27.630086    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jg8hf" [b5a9bdde-a4d4-442a-9cee-f8cbc46520d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 18:44:27.630091    5102 system_pods.go:89] "storage-provisioner" [63c4de3f-fd5a-4fb8-ba9f-4e238f7541f3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 18:44:27.630107    5102 retry.go:31] will retry after 361.124555ms: missing components: kube-dns
	I1006 18:44:27.636744    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:27.643271    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:27.743423    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:28.002465    5102 system_pods.go:86] 19 kube-system pods found
	I1006 18:44:28.002503    5102 system_pods.go:89] "coredns-66bc5c9577-bx5cf" [779ed2d3-fc88-4853-ac93-7a56e62a0190] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 18:44:28.002512    5102 system_pods.go:89] "csi-hostpath-attacher-0" [d762fc6a-25ef-424f-b865-914e712ed260] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 18:44:28.002520    5102 system_pods.go:89] "csi-hostpath-resizer-0" [f5c6a57b-496a-40d3-a7da-cae95047a0db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 18:44:28.002528    5102 system_pods.go:89] "csi-hostpathplugin-g7kvd" [db381122-217d-4859-9d86-d7d29a692af5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 18:44:28.002532    5102 system_pods.go:89] "etcd-addons-442328" [8562fac7-6c63-42c3-98bb-4c599ba17505] Running
	I1006 18:44:28.002540    5102 system_pods.go:89] "kindnet-g2tkh" [1e0c713a-0110-4300-9fe7-8834abac0a34] Running
	I1006 18:44:28.002549    5102 system_pods.go:89] "kube-apiserver-addons-442328" [5b6af8a1-10f7-478e-b97d-27e19e5f2469] Running
	I1006 18:44:28.002553    5102 system_pods.go:89] "kube-controller-manager-addons-442328" [0d008802-4fa7-4ef6-81d3-f391d8b8488f] Running
	I1006 18:44:28.002564    5102 system_pods.go:89] "kube-ingress-dns-minikube" [737e5bad-1cf7-448f-acf2-93e38e152f16] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 18:44:28.002568    5102 system_pods.go:89] "kube-proxy-n686b" [68644dfa-f7a5-42aa-938f-b928268e97f0] Running
	I1006 18:44:28.002573    5102 system_pods.go:89] "kube-scheduler-addons-442328" [fafbbc55-3ea9-4805-8bc2-0a365282ee86] Running
	I1006 18:44:28.002586    5102 system_pods.go:89] "metrics-server-85b7d694d7-swsbd" [d28f1e31-8b2f-42c5-b7a2-a1662d0c5412] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 18:44:28.002593    5102 system_pods.go:89] "nvidia-device-plugin-daemonset-2ptdw" [9a959e36-17ef-49b2-8c50-36b639bbbbf7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 18:44:28.002600    5102 system_pods.go:89] "registry-66898fdd98-k4cb6" [0b43b208-864a-4999-a910-ca608e917e81] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 18:44:28.002610    5102 system_pods.go:89] "registry-creds-764b6fb674-pgnhk" [7ae00ec6-941d-4df4-a11f-08481b31d714] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 18:44:28.002616    5102 system_pods.go:89] "registry-proxy-zqdsp" [b9743061-9fde-4f5c-aa4f-c77616b5dfa7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 18:44:28.002626    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-glwdc" [1ac758b5-014d-4f6e-a68a-a653a4c90550] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 18:44:28.002645    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jg8hf" [b5a9bdde-a4d4-442a-9cee-f8cbc46520d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 18:44:28.002671    5102 system_pods.go:89] "storage-provisioner" [63c4de3f-fd5a-4fb8-ba9f-4e238f7541f3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 18:44:28.002686    5102 retry.go:31] will retry after 463.661369ms: missing components: kube-dns
	I1006 18:44:28.097200    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:28.197384    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:28.197637    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:28.214533    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:28.472607    5102 system_pods.go:86] 19 kube-system pods found
	I1006 18:44:28.472693    5102 system_pods.go:89] "coredns-66bc5c9577-bx5cf" [779ed2d3-fc88-4853-ac93-7a56e62a0190] Running
	I1006 18:44:28.472720    5102 system_pods.go:89] "csi-hostpath-attacher-0" [d762fc6a-25ef-424f-b865-914e712ed260] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 18:44:28.472740    5102 system_pods.go:89] "csi-hostpath-resizer-0" [f5c6a57b-496a-40d3-a7da-cae95047a0db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 18:44:28.472772    5102 system_pods.go:89] "csi-hostpathplugin-g7kvd" [db381122-217d-4859-9d86-d7d29a692af5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 18:44:28.472792    5102 system_pods.go:89] "etcd-addons-442328" [8562fac7-6c63-42c3-98bb-4c599ba17505] Running
	I1006 18:44:28.472813    5102 system_pods.go:89] "kindnet-g2tkh" [1e0c713a-0110-4300-9fe7-8834abac0a34] Running
	I1006 18:44:28.472845    5102 system_pods.go:89] "kube-apiserver-addons-442328" [5b6af8a1-10f7-478e-b97d-27e19e5f2469] Running
	I1006 18:44:28.472864    5102 system_pods.go:89] "kube-controller-manager-addons-442328" [0d008802-4fa7-4ef6-81d3-f391d8b8488f] Running
	I1006 18:44:28.472886    5102 system_pods.go:89] "kube-ingress-dns-minikube" [737e5bad-1cf7-448f-acf2-93e38e152f16] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 18:44:28.472903    5102 system_pods.go:89] "kube-proxy-n686b" [68644dfa-f7a5-42aa-938f-b928268e97f0] Running
	I1006 18:44:28.472923    5102 system_pods.go:89] "kube-scheduler-addons-442328" [fafbbc55-3ea9-4805-8bc2-0a365282ee86] Running
	I1006 18:44:28.472941    5102 system_pods.go:89] "metrics-server-85b7d694d7-swsbd" [d28f1e31-8b2f-42c5-b7a2-a1662d0c5412] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 18:44:28.472972    5102 system_pods.go:89] "nvidia-device-plugin-daemonset-2ptdw" [9a959e36-17ef-49b2-8c50-36b639bbbbf7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 18:44:28.472990    5102 system_pods.go:89] "registry-66898fdd98-k4cb6" [0b43b208-864a-4999-a910-ca608e917e81] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 18:44:28.473020    5102 system_pods.go:89] "registry-creds-764b6fb674-pgnhk" [7ae00ec6-941d-4df4-a11f-08481b31d714] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 18:44:28.473041    5102 system_pods.go:89] "registry-proxy-zqdsp" [b9743061-9fde-4f5c-aa4f-c77616b5dfa7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 18:44:28.473063    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-glwdc" [1ac758b5-014d-4f6e-a68a-a653a4c90550] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 18:44:28.473085    5102 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jg8hf" [b5a9bdde-a4d4-442a-9cee-f8cbc46520d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 18:44:28.473103    5102 system_pods.go:89] "storage-provisioner" [63c4de3f-fd5a-4fb8-ba9f-4e238f7541f3] Running
	I1006 18:44:28.473126    5102 system_pods.go:126] duration metric: took 1.589124541s to wait for k8s-apps to be running ...
	I1006 18:44:28.473145    5102 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 18:44:28.473216    5102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 18:44:28.571350    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:28.633630    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:28.639585    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:28.712586    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:28.878770    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.756061368s)
	W1006 18:44:28.878818    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:28.879003    5102 retry.go:31] will retry after 17.384698944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:28.878899    5102 system_svc.go:56] duration metric: took 405.738665ms WaitForService to wait for kubelet
	I1006 18:44:28.879055    5102 kubeadm.go:586] duration metric: took 43.288711831s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 18:44:28.879081    5102 node_conditions.go:102] verifying NodePressure condition ...
	I1006 18:44:28.882100    5102 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 18:44:28.882145    5102 node_conditions.go:123] node cpu capacity is 2
	I1006 18:44:28.882159    5102 node_conditions.go:105] duration metric: took 3.071991ms to run NodePressure ...
	I1006 18:44:28.882212    5102 start.go:241] waiting for startup goroutines ...
	I1006 18:44:29.058736    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:29.132659    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:29.139417    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:29.212269    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:29.557815    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:29.636964    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:29.639567    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:29.736882    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:30.068891    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:30.133410    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:30.139330    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:30.225530    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:30.558236    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:30.658913    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:30.659124    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:30.712702    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:31.058730    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:31.133737    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:31.139555    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:31.212640    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:31.558522    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:31.633741    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:31.639575    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:31.713212    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:32.058481    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:32.133750    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:32.139895    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:32.213793    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:32.558314    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:32.633898    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:32.639320    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:32.712501    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:33.058324    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:33.134098    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:33.139486    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:33.212933    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:33.558021    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:33.633575    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:33.639527    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:33.713394    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:34.057632    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:34.133638    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:34.139124    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:34.212720    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:34.558370    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:34.659583    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:34.659817    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:34.713047    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:35.058418    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:35.133549    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:35.139613    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:35.212538    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:35.558083    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:35.633126    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:35.638939    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:35.712837    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:36.059123    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:36.133189    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:36.139027    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:36.213783    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:36.557801    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:36.633248    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:36.638426    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:36.712556    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:37.057563    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:37.133400    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:37.139372    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:37.212699    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:37.558995    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:37.633387    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:37.639459    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:37.712809    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:38.058353    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:38.133984    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:38.138888    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:38.213147    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:38.558459    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:38.633172    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:38.639415    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:38.712180    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:39.058785    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:39.133528    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:39.139838    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:39.213488    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:39.558668    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:39.633918    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:39.639987    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:39.713465    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:40.061787    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:40.134847    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:40.139586    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:40.213256    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:40.559503    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:40.634378    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:40.638998    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:40.713252    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:41.058995    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:41.133560    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:41.139876    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:41.213230    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:41.565542    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:41.632909    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:41.640099    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:41.713444    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:42.059134    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:42.134349    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:42.140649    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:42.217903    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:42.558297    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:42.633904    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:42.638814    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:42.737018    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:43.059062    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:43.137816    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:43.139619    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:43.212862    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:43.559209    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:43.632975    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:43.639889    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:43.713072    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:44.058646    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:44.133660    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:44.139346    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:44.212939    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:44.561745    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:44.633226    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:44.639743    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:44.714019    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:45.063331    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:45.134670    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:45.140226    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:45.218338    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:45.558316    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:45.633566    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:45.639768    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:45.713108    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:46.058817    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:46.133217    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:46.139524    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:46.212974    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:46.264311    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:44:46.558686    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:46.633089    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:46.639074    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:46.713604    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:47.063850    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:47.134113    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:47.140974    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:47.213755    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:47.462752    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.198346753s)
	W1006 18:44:47.462840    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:47.462873    5102 retry.go:31] will retry after 21.39241557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:44:47.559460    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:47.633929    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:47.640182    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:47.712914    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:48.058864    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:48.133044    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:48.139512    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:48.212809    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:48.559165    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:48.633583    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:48.640444    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:48.713036    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:49.059103    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:49.133137    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:49.138892    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:49.213496    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:49.558070    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:49.633048    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:49.638831    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:49.712523    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:50.058300    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:50.133983    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:50.139068    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:50.212919    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:50.558044    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:50.633098    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:50.638926    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:50.713175    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:51.059395    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:51.133905    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:51.139276    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:51.212911    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:51.559301    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:51.633882    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:51.639295    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:51.712742    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:52.059786    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:52.134339    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:52.139845    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:52.213255    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:52.566282    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:52.633931    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:52.640043    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:52.713275    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:53.058120    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:53.133220    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:53.139143    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:53.213286    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:53.558342    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:53.632966    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:53.639332    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:53.713142    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:54.059945    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:54.132957    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:54.139522    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:54.214042    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:54.559354    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:54.659560    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:54.659683    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:54.712119    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:55.059400    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:55.133657    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:55.139671    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:55.212727    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:55.559686    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:55.632674    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:55.639348    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:55.713377    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:56.058301    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:56.133482    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:56.139613    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:56.212616    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:56.559483    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:56.634733    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:56.640765    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:56.712723    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:57.059021    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:57.133248    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:57.140167    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:57.212898    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:57.559547    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:57.634661    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:57.639478    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:57.735113    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:58.058671    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:58.133518    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:58.139370    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:58.212409    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:58.576418    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:58.636063    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:58.639850    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:58.735289    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:59.059204    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:59.160617    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:59.160813    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:59.212779    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:44:59.559591    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:44:59.633587    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:44:59.639287    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:44:59.712942    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:00.061275    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:00.137417    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:00.143340    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:00.213776    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:00.575887    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:00.646228    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:00.647574    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:00.714999    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:01.059217    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:01.133689    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:01.140228    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:01.213191    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:01.558590    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:01.633371    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:01.641709    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:01.713401    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:02.058427    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:02.134171    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:02.139069    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:02.212386    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:02.558788    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:02.633345    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:02.639026    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:02.713233    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:03.059425    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:03.134000    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:03.139146    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:03.213544    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:03.557654    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:03.633321    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:03.638786    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:03.712677    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:04.058485    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:04.133851    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:04.138820    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:04.213654    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:04.559356    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:04.634028    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:04.639040    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:04.713419    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:05.059336    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:05.134362    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:05.140529    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:05.212868    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:05.563010    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:05.633644    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:05.639924    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:05.713163    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:06.062662    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:06.142473    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:06.145443    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:06.214715    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:06.579168    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:06.750663    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:06.750939    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:06.751638    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:07.057895    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:07.133711    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:07.139436    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:07.212434    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:07.558841    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:07.632980    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:07.639030    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:07.713338    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:08.058426    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:08.133869    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:08.138999    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:08.213268    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:08.558352    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:08.633638    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:08.639616    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:08.712767    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:08.856179    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:45:09.058593    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:09.133961    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:09.138896    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:09.213924    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:09.559629    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:09.634013    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:09.638683    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:09.712538    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:10.048681    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.1924654s)
	W1006 18:45:10.048718    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 18:45:10.048736    5102 retry.go:31] will retry after 34.297265778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
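	The three apply failures above ("will retry after 17.38s / 21.39s / 34.30s") reflect a backoff-and-retry loop around the kubectl apply command. The following is only a minimal sketch of that pattern under stated assumptions — applyWithRetry, flatten, the 15s starting interval, and the 1.5x growth factor are hypothetical and are not minikube's actual retry.go implementation; the kubectl, kubeconfig, and manifest paths are taken from the log lines above.

	// Hypothetical sketch of an apply-with-backoff loop (not minikube's retry.go).
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// applyWithRetry runs `kubectl apply --force -f ...` and, on failure, waits an
	// increasing interval before trying again, mirroring the "will retry after ..."
	// messages in the log above.
	func applyWithRetry(kubeconfig, kubectl string, manifests []string, attempts int) error {
		args := append([]string{"apply", "--force"}, flatten(manifests)...)
		delay := 15 * time.Second // assumed starting interval
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command(kubectl, args...)
			cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the wait between attempts (assumed factor)
		}
		return lastErr
	}

	// flatten turns a list of manifest paths into repeated -f arguments.
	func flatten(files []string) []string {
		var out []string
		for _, f := range files {
			out = append(out, "-f", f)
		}
		return out
	}

	func main() {
		err := applyWithRetry(
			"/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			[]string{"/etc/kubernetes/addons/ig-crd.yaml", "/etc/kubernetes/addons/ig-deployment.yaml"},
			3,
		)
		if err != nil {
			fmt.Println(err)
		}
	}

	Note that in this run no amount of retrying could succeed: the stderr above shows ig-crd.yaml failing server-side validation because apiVersion and kind are not set in the manifest, so every attempt exits with status 1.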
	I1006 18:45:10.058896    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:10.133216    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:10.139236    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:10.212576    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:10.558997    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:10.633297    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:10.639228    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:10.712441    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:11.058390    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:11.134233    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:11.139431    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:11.212966    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:11.559036    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:11.632974    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:11.638978    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:11.712958    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:12.058422    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:12.159507    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:12.159679    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:12.212719    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:12.558224    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:12.634913    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:12.639895    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:12.712632    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:13.058324    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:13.133416    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:13.139483    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:13.212311    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:13.558712    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:13.634505    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:13.639691    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:13.713301    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:14.058337    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:14.133847    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:14.139068    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:14.212664    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:14.558766    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:14.632669    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:14.650282    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:14.712643    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:15.064102    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:15.133676    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:15.139910    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 18:45:15.213250    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:15.558148    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:15.642558    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:15.652634    5102 kapi.go:107] duration metric: took 1m23.521914892s to wait for kubernetes.io/minikube-addons=registry ...
	I1006 18:45:15.713607    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:16.058733    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:16.133098    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:16.213336    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:16.563676    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:16.633227    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:16.733236    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:17.058521    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:17.134551    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:17.212627    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:17.558206    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:17.633876    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:17.713130    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:18.059076    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:18.133513    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:18.212892    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:18.558906    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:18.633833    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:18.713142    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:19.061771    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:19.133962    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:19.213337    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:19.561088    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:19.636493    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:19.713010    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:20.059568    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:20.134293    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:20.218793    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:20.559625    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:20.651263    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:20.714071    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:21.058614    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:21.133456    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:21.212210    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:21.558875    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:21.633895    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:21.713600    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:22.059004    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:22.133895    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:22.213590    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:22.558736    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:22.633636    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:22.712740    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:23.059064    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:23.133866    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:23.212656    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 18:45:23.559303    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:23.633807    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:23.728488    5102 kapi.go:107] duration metric: took 1m28.019250269s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1006 18:45:23.731941    5102 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-442328 cluster.
	I1006 18:45:23.735014    5102 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1006 18:45:23.738153    5102 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
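Editor's note: the opt-out mentioned in the gcp-auth message above is just a pod label. A minimal sketch of a pod object carrying the gcp-auth-skip-secret key, built with the standard Kubernetes API types (the pod name is a placeholder, and the "true" value is an assumption since the message only requires the key to be present):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pods labeled with gcp-auth-skip-secret are skipped by the gcp-auth
	// webhook, so no credential secret gets mounted into them.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds", // placeholder name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"},
			},
		},
	}
	fmt.Println(pod.Labels)
}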
	I1006 18:45:24.058366    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:24.158922    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:24.559206    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:24.637340    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:25.059396    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:25.134786    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:25.559177    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:25.633300    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:26.059641    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:26.132893    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:26.558701    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:26.633398    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:27.059399    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:27.133633    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:27.558062    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:27.633454    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:28.058543    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:28.134201    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:28.557835    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:28.638614    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:29.059143    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:29.133582    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:29.558844    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:29.632825    5102 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 18:45:30.072376    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:30.165606    5102 kapi.go:107] duration metric: took 1m38.036053584s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1006 18:45:30.558276    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:31.064370    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:31.559152    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:32.059052    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:32.559117    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:33.059026    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:33.558487    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:34.058268    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:34.558264    5102 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 18:45:35.059337    5102 kapi.go:107] duration metric: took 1m42.504736473s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1006 18:45:44.346181    5102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 18:45:45.411689    5102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.065470802s)
	W1006 18:45:45.411759    5102 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 18:45:45.411855    5102 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
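Editor's note: the failing apply is driven by minikube's retry helper (the earlier "will retry after 34.297265778s" line from retry.go). A stand-alone sketch of that retry-with-delay pattern, not the actual retry.go implementation; the attempt count and delay here are arbitrary:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs fn up to attempts times, sleeping delay between failures.
// The real helper varies the delay; a fixed value keeps the sketch short.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, err, delay)
		time.Sleep(delay)
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	calls := 0
	err := retry(3, 2*time.Second, func() error {
		calls++
		if calls < 3 {
			return errors.New("apply failed") // stand-in for the kubectl error above
		}
		return nil
	})
	fmt.Println("result:", err)
}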
	I1006 18:45:45.456224    5102 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, registry-creds, cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, metrics-server, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1006 18:45:45.471259    5102 addons.go:514] duration metric: took 1m59.880667657s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin registry-creds cloud-spanner ingress-dns storage-provisioner default-storageclass metrics-server yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1006 18:45:45.471338    5102 start.go:246] waiting for cluster config update ...
	I1006 18:45:45.471360    5102 start.go:255] writing updated cluster config ...
	I1006 18:45:45.472427    5102 ssh_runner.go:195] Run: rm -f paused
	I1006 18:45:45.476964    5102 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 18:45:45.481673    5102 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bx5cf" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:45.505271    5102 pod_ready.go:94] pod "coredns-66bc5c9577-bx5cf" is "Ready"
	I1006 18:45:45.505303    5102 pod_ready.go:86] duration metric: took 23.595415ms for pod "coredns-66bc5c9577-bx5cf" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:45.508654    5102 pod_ready.go:83] waiting for pod "etcd-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:45.513990    5102 pod_ready.go:94] pod "etcd-addons-442328" is "Ready"
	I1006 18:45:45.514022    5102 pod_ready.go:86] duration metric: took 5.340153ms for pod "etcd-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:45.517944    5102 pod_ready.go:83] waiting for pod "kube-apiserver-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:45.523637    5102 pod_ready.go:94] pod "kube-apiserver-addons-442328" is "Ready"
	I1006 18:45:45.523674    5102 pod_ready.go:86] duration metric: took 5.699574ms for pod "kube-apiserver-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:45.526658    5102 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:45.881700    5102 pod_ready.go:94] pod "kube-controller-manager-addons-442328" is "Ready"
	I1006 18:45:45.881733    5102 pod_ready.go:86] duration metric: took 355.048417ms for pod "kube-controller-manager-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:46.081054    5102 pod_ready.go:83] waiting for pod "kube-proxy-n686b" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:46.481067    5102 pod_ready.go:94] pod "kube-proxy-n686b" is "Ready"
	I1006 18:45:46.481098    5102 pod_ready.go:86] duration metric: took 400.014045ms for pod "kube-proxy-n686b" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:46.681407    5102 pod_ready.go:83] waiting for pod "kube-scheduler-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:47.081448    5102 pod_ready.go:94] pod "kube-scheduler-addons-442328" is "Ready"
	I1006 18:45:47.081477    5102 pod_ready.go:86] duration metric: took 400.039212ms for pod "kube-scheduler-addons-442328" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:45:47.081490    5102 pod_ready.go:40] duration metric: took 1.604491048s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
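Editor's note: the pod_ready waits above confirm that each kube-system pod carrying one of the listed control-plane labels reports a Ready condition. A minimal client-go sketch of that kind of check (the kubeconfig path, namespace, label selector, and poll interval are placeholders; this is not the wait loop minikube itself uses):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		allReady := len(pods.Items) > 0
		for _, p := range pods.Items {
			if !podReady(p) {
				allReady = false
			}
		}
		if allReady {
			fmt.Println("all matching pods are Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}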
	I1006 18:45:47.489876    5102 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 18:45:47.495228    5102 out.go:179] * Done! kubectl is now configured to use "addons-442328" cluster and "default" namespace by default
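Editor's note: the line above reports a minor-version skew of 1 between the local kubectl (1.33.2) and the cluster (1.34.1), which is within kubectl's supported one-minor-version skew. A trivial sketch of computing that skew from the two version strings (pure string handling, not minikube's actual version check):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor version number from a "major.minor.patch" string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.33.2", "1.34.1"
	skew := minor(cluster) - minor(kubectl)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew) // prints: minor skew: 1
}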
	
	
	==> CRI-O <==
	Oct 06 18:45:39 addons-442328 crio[833]: time="2025-10-06T18:45:39.891863813Z" level=info msg="Stopped pod sandbox (already stopped): 5c13195b45dc291d012603f5b604e18e7c6ed4748e04ee9fb4dc87935c541c59" id=04906650-85ba-413b-a972-16e41f9debaf name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 06 18:45:39 addons-442328 crio[833]: time="2025-10-06T18:45:39.892304097Z" level=info msg="Removing pod sandbox: 5c13195b45dc291d012603f5b604e18e7c6ed4748e04ee9fb4dc87935c541c59" id=dfa57fb6-ba75-421d-8b44-590cf72a3f07 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 06 18:45:39 addons-442328 crio[833]: time="2025-10-06T18:45:39.896692811Z" level=info msg="Removed pod sandbox: 5c13195b45dc291d012603f5b604e18e7c6ed4748e04ee9fb4dc87935c541c59" id=dfa57fb6-ba75-421d-8b44-590cf72a3f07 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 06 18:45:48 addons-442328 crio[833]: time="2025-10-06T18:45:48.656619222Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7e272779-57bd-4f6e-a158-e864e49cc69e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 18:45:48 addons-442328 crio[833]: time="2025-10-06T18:45:48.656694785Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 18:45:48 addons-442328 crio[833]: time="2025-10-06T18:45:48.664426414Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:977564ff1914fcb45832cca65cc453236c574a3105c1633d2d15e9878999e8e1 UID:ed0705fd-1d38-4f92-9a24-929f35e2a002 NetNS:/var/run/netns/f38a4cc3-16eb-453e-95f2-d47a48fd9c61 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40023f7320}] Aliases:map[]}"
	Oct 06 18:45:48 addons-442328 crio[833]: time="2025-10-06T18:45:48.664478617Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 06 18:45:48 addons-442328 crio[833]: time="2025-10-06T18:45:48.676979206Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:977564ff1914fcb45832cca65cc453236c574a3105c1633d2d15e9878999e8e1 UID:ed0705fd-1d38-4f92-9a24-929f35e2a002 NetNS:/var/run/netns/f38a4cc3-16eb-453e-95f2-d47a48fd9c61 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40023f7320}] Aliases:map[]}"
	Oct 06 18:45:48 addons-442328 crio[833]: time="2025-10-06T18:45:48.677124958Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 06 18:45:48 addons-442328 crio[833]: time="2025-10-06T18:45:48.68076142Z" level=info msg="Ran pod sandbox 977564ff1914fcb45832cca65cc453236c574a3105c1633d2d15e9878999e8e1 with infra container: default/busybox/POD" id=7e272779-57bd-4f6e-a158-e864e49cc69e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 18:45:48 addons-442328 crio[833]: time="2025-10-06T18:45:48.681829071Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1723e22d-9de5-4da4-9bf8-b83f78ce70a3 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 18:45:48 addons-442328 crio[833]: time="2025-10-06T18:45:48.681938966Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1723e22d-9de5-4da4-9bf8-b83f78ce70a3 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 18:45:48 addons-442328 crio[833]: time="2025-10-06T18:45:48.681982274Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1723e22d-9de5-4da4-9bf8-b83f78ce70a3 name=/runtime.v1.ImageService/ImageStatus
	(Editor's note: "artfiact" in the CRI-O message above is a typo for "artifact" in the runtime's own log text.)
	Oct 06 18:45:48 addons-442328 crio[833]: time="2025-10-06T18:45:48.682753966Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f0486dfb-60bb-43fe-b80d-42668fd8674f name=/runtime.v1.ImageService/PullImage
	Oct 06 18:45:48 addons-442328 crio[833]: time="2025-10-06T18:45:48.684556894Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 06 18:45:50 addons-442328 crio[833]: time="2025-10-06T18:45:50.489400192Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=f0486dfb-60bb-43fe-b80d-42668fd8674f name=/runtime.v1.ImageService/PullImage
	Oct 06 18:45:50 addons-442328 crio[833]: time="2025-10-06T18:45:50.490011395Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=28f0cd76-7481-4dab-bcdc-a3310104cdec name=/runtime.v1.ImageService/ImageStatus
	Oct 06 18:45:50 addons-442328 crio[833]: time="2025-10-06T18:45:50.491653522Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d96f7b67-0546-4bb7-978c-c2b08779bd7e name=/runtime.v1.ImageService/ImageStatus
	Oct 06 18:45:50 addons-442328 crio[833]: time="2025-10-06T18:45:50.498213865Z" level=info msg="Creating container: default/busybox/busybox" id=bd50d1ef-0313-4416-9c48-75f90a79d503 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 18:45:50 addons-442328 crio[833]: time="2025-10-06T18:45:50.498995059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 18:45:50 addons-442328 crio[833]: time="2025-10-06T18:45:50.506275023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 18:45:50 addons-442328 crio[833]: time="2025-10-06T18:45:50.506768798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 18:45:50 addons-442328 crio[833]: time="2025-10-06T18:45:50.522220085Z" level=info msg="Created container 835e7ff88c2997d33a9803027da8288a9495b29e6f5d5401ae7ec921db4f6150: default/busybox/busybox" id=bd50d1ef-0313-4416-9c48-75f90a79d503 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 18:45:50 addons-442328 crio[833]: time="2025-10-06T18:45:50.524217119Z" level=info msg="Starting container: 835e7ff88c2997d33a9803027da8288a9495b29e6f5d5401ae7ec921db4f6150" id=0448175a-8e2e-4eec-b6b2-9868ac706179 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 18:45:50 addons-442328 crio[833]: time="2025-10-06T18:45:50.532853581Z" level=info msg="Started container" PID=4963 containerID=835e7ff88c2997d33a9803027da8288a9495b29e6f5d5401ae7ec921db4f6150 description=default/busybox/busybox id=0448175a-8e2e-4eec-b6b2-9868ac706179 name=/runtime.v1.RuntimeService/StartContainer sandboxID=977564ff1914fcb45832cca65cc453236c574a3105c1633d2d15e9878999e8e1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	835e7ff88c299       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   977564ff1914f       busybox                                     default
	ebe3bd43a396e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          24 seconds ago       Running             csi-snapshotter                          0                   e2b2f3f9b3ef2       csi-hostpathplugin-g7kvd                    kube-system
	4cd182a7b457f       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          25 seconds ago       Running             csi-provisioner                          0                   e2b2f3f9b3ef2       csi-hostpathplugin-g7kvd                    kube-system
	08ff7320e4ddc       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            26 seconds ago       Running             liveness-probe                           0                   e2b2f3f9b3ef2       csi-hostpathplugin-g7kvd                    kube-system
	9ca99ca5ed94b       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           27 seconds ago       Running             hostpath                                 0                   e2b2f3f9b3ef2       csi-hostpathplugin-g7kvd                    kube-system
	55e40e0693ffa       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             29 seconds ago       Running             controller                               0                   55b478414d411       ingress-nginx-controller-675c5ddd98-55rt2   ingress-nginx
	abd23f19f3d54       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 35 seconds ago       Running             gcp-auth                                 0                   67a9cd802d4cd       gcp-auth-78565c9fb4-5qbhb                   gcp-auth
	14f703b6467c6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                38 seconds ago       Running             node-driver-registrar                    0                   e2b2f3f9b3ef2       csi-hostpathplugin-g7kvd                    kube-system
	7d6f337fccbd5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:74b72c3673aff7e1fa7c3ebae80b5dbe5446ce1906ef8d4f98d4b9f6e72c88e1                            39 seconds ago       Running             gadget                                   0                   05450c520b570       gadget-6t8nb                                gadget
	6172f7270c3d6       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              43 seconds ago       Running             registry-proxy                           0                   15431d9de0947       registry-proxy-zqdsp                        kube-system
	400a8af72e5ff       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   46 seconds ago       Exited              patch                                    0                   60ce7451efd4c       ingress-nginx-admission-patch-7ksts         ingress-nginx
	8b64f9fe4af61       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     46 seconds ago       Running             nvidia-device-plugin-ctr                 0                   497a91a0ca1cc       nvidia-device-plugin-daemonset-2ptdw        kube-system
	db7b96ba901f3       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             46 seconds ago       Exited              patch                                    2                   47254b5190bf6       gcp-auth-certs-patch-b4lwl                  gcp-auth
	aa4f571114f60       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   e2b2f3f9b3ef2       csi-hostpathplugin-g7kvd                    kube-system
	95373c9fa405f       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   875a8baf32253       local-path-provisioner-648f6765c9-lgzk2     local-path-storage
	c72005fae6fa3       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   7268c15204001       yakd-dashboard-5ff678cb9-ffptf              yakd-dashboard
	e949d4e668015       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   942b2f7e1d62f       csi-hostpath-resizer-0                      kube-system
	ae20a5867a78c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   91934e9602ec3       ingress-nginx-admission-create-xspd2        ingress-nginx
	eaea2302c49ad       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   897a1cb447e18       metrics-server-85b7d694d7-swsbd             kube-system
	156dcf2838ea3       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   447d906949aa5       snapshot-controller-7d9fbc56b8-glwdc        kube-system
	f77ff0e3011bc       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   69bcfd32a0d18       snapshot-controller-7d9fbc56b8-jg8hf        kube-system
	47db75a0747be       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   26105da3f9b8e       csi-hostpath-attacher-0                     kube-system
	c540f260b155a       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   b7ed222a8ba2f       kube-ingress-dns-minikube                   kube-system
	d6ed31f83769e       gcr.io/cloud-spanner-emulator/emulator@sha256:77d0cd8103fe32875bbb04c070a7d1db292093b65d11c99c00cf39e8a13852f5                               About a minute ago   Running             cloud-spanner-emulator                   0                   f499bced467a0       cloud-spanner-emulator-85f6b7fc65-fcmlt     default
	d982508a1963a       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           About a minute ago   Running             registry                                 0                   3b74f761b66c0       registry-66898fdd98-k4cb6                   kube-system
	4a454595ca74d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   8f00c80f9e94d       storage-provisioner                         kube-system
	1ee286915381b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   f2232b4fa0b3c       coredns-66bc5c9577-bx5cf                    kube-system
	50d122dfe0df6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   d621058fceeef       kube-proxy-n686b                            kube-system
	5c527c40e2db3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   0a6579d0c7a4d       kindnet-g2tkh                               kube-system
	8f11330482798       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   1ca73eb868415       kube-apiserver-addons-442328                kube-system
	4b78f6f126d3c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   0580ee0ec7562       kube-scheduler-addons-442328                kube-system
	c436c27fc179e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   1f2cd306074d5       etcd-addons-442328                          kube-system
	9bb11c78f5525       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   c65cfdb6c3e2f       kube-controller-manager-addons-442328       kube-system
	
	
	==> coredns [1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6] <==
	[INFO] 10.244.0.17:35269 - 12171 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000170433s
	[INFO] 10.244.0.17:35269 - 58195 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002890402s
	[INFO] 10.244.0.17:35269 - 1420 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003053911s
	[INFO] 10.244.0.17:35269 - 38230 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00012368s
	[INFO] 10.244.0.17:35269 - 56459 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000079896s
	[INFO] 10.244.0.17:34847 - 21209 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000198743s
	[INFO] 10.244.0.17:34847 - 20747 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000086469s
	[INFO] 10.244.0.17:60064 - 49807 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091802s
	[INFO] 10.244.0.17:60064 - 49626 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000219067s
	[INFO] 10.244.0.17:34007 - 1338 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000113915s
	[INFO] 10.244.0.17:34007 - 1141 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000141527s
	[INFO] 10.244.0.17:51545 - 34372 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001577599s
	[INFO] 10.244.0.17:51545 - 34192 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00161138s
	[INFO] 10.244.0.17:46963 - 36903 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00012651s
	[INFO] 10.244.0.17:46963 - 36755 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000183529s
	[INFO] 10.244.0.20:52334 - 63421 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000271122s
	[INFO] 10.244.0.20:36291 - 10235 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000153826s
	[INFO] 10.244.0.20:44980 - 30076 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000255778s
	[INFO] 10.244.0.20:52634 - 5665 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155459s
	[INFO] 10.244.0.20:39976 - 62810 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000239712s
	[INFO] 10.244.0.20:57460 - 41861 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000181306s
	[INFO] 10.244.0.20:50529 - 23139 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002088884s
	[INFO] 10.244.0.20:44821 - 2583 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002507326s
	[INFO] 10.244.0.20:39688 - 51821 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002640901s
	[INFO] 10.244.0.20:40418 - 26182 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002897491s
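Editor's note: the NXDOMAIN/NOERROR pattern above is ordinary ndots search-path expansion: the pod's resolver appends each search domain from resolv.conf, so only the fully qualified form resolves. A small sketch of that expansion (the search list and ndots value are typical in-cluster defaults assumed for illustration, not read from this node):

package main

import (
	"fmt"
	"strings"
)

// expand lists the candidate FQDNs a resolver with the given search path and
// ndots setting would try for name, in the usual glibc/musl order.
func expand(name string, search []string, ndots int) []string {
	dots := strings.Count(name, ".")
	var out []string
	if dots >= ndots {
		out = append(out, name) // names with enough dots are tried as-is first
	}
	for _, s := range search {
		out = append(out, name+"."+s)
	}
	if dots < ndots {
		out = append(out, name)
	}
	return out
}

func main() {
	search := []string{ // assumed defaults for a kube-system pod
		"kube-system.svc.cluster.local", "svc.cluster.local",
		"cluster.local", "us-east-2.compute.internal",
	}
	for _, q := range expand("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q)
	}
}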
	
	
	==> describe nodes <==
	Name:               addons-442328
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-442328
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=addons-442328
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T18_43_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-442328
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-442328"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 18:43:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-442328
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 18:45:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 18:45:42 +0000   Mon, 06 Oct 2025 18:43:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 18:45:42 +0000   Mon, 06 Oct 2025 18:43:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 18:45:42 +0000   Mon, 06 Oct 2025 18:43:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 18:45:42 +0000   Mon, 06 Oct 2025 18:44:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-442328
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 2553b47105a84e17acadae7422faa4a6
	  System UUID:                f9ea306a-7c47-4dcd-b3b3-b1912080fbb2
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-85f6b7fc65-fcmlt      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  gadget                      gadget-6t8nb                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  gcp-auth                    gcp-auth-78565c9fb4-5qbhb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-55rt2    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m7s
	  kube-system                 coredns-66bc5c9577-bx5cf                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m13s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 csi-hostpathplugin-g7kvd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 etcd-addons-442328                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m18s
	  kube-system                 kindnet-g2tkh                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m14s
	  kube-system                 kube-apiserver-addons-442328                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-addons-442328        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-n686b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-addons-442328                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 metrics-server-85b7d694d7-swsbd              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m8s
	  kube-system                 nvidia-device-plugin-daemonset-2ptdw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 registry-66898fdd98-k4cb6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 registry-creds-764b6fb674-pgnhk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 registry-proxy-zqdsp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 snapshot-controller-7d9fbc56b8-glwdc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 snapshot-controller-7d9fbc56b8-jg8hf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  local-path-storage          local-path-provisioner-648f6765c9-lgzk2      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-ffptf               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m11s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m25s (x8 over 2m25s)  kubelet          Node addons-442328 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m25s (x8 over 2m25s)  kubelet          Node addons-442328 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m25s (x8 over 2m25s)  kubelet          Node addons-442328 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m19s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m19s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m18s                  kubelet          Node addons-442328 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m18s                  kubelet          Node addons-442328 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m18s                  kubelet          Node addons-442328 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m14s                  node-controller  Node addons-442328 event: Registered Node addons-442328 in Controller
	  Normal   NodeReady                92s                    kubelet          Node addons-442328 status is now: NodeReady
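Editor's note: in the "Allocated resources" table above, the percentages are simply total requests or limits divided by the node's allocatable capacity, e.g. 1050m CPU requested on a 2-CPU (2000m) node is 52%. A quick Go sketch of that arithmetic using the values from this node:

package main

import "fmt"

func main() {
	// Values taken from the node description above.
	allocatableMilliCPU := int64(2000) // 2 CPUs
	requestedMilliCPU := int64(1050)   // sum of container CPU requests
	allocatableMemKi := int64(8022300) // memory: 8022300Ki
	requestedMemKi := int64(638) * 1024 // 638Mi expressed in Ki

	fmt.Printf("cpu requests: %d%%\n", requestedMilliCPU*100/allocatableMilliCPU) // 52%
	fmt.Printf("memory requests: %d%%\n", requestedMemKi*100/allocatableMemKi)    // 8%
}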
	
	
	==> dmesg <==
	[Oct 6 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015541] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.518273] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033731] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.758438] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.412532] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 6 18:43] overlayfs: idmapped layers are currently not supported
	[  +0.067491] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f] <==
	{"level":"warn","ts":"2025-10-06T18:43:36.078136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.113890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.137239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.172655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.185158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.208974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.226882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.271511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.272365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.284386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.298584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.318314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.344566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.364165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.387643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.413811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.432890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.448754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:36.547767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:52.768568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:43:52.790705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:44:14.328400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:44:14.342776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:44:14.367987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:44:14.383177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44600","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [abd23f19f3d541ab9d91e25a7ddfe45774c89bb4aeb4147381d6afe6e6f4c94c] <==
	2025/10/06 18:45:23 GCP Auth Webhook started!
	2025/10/06 18:45:48 Ready to marshal response ...
	2025/10/06 18:45:48 Ready to write response ...
	2025/10/06 18:45:48 Ready to marshal response ...
	2025/10/06 18:45:48 Ready to write response ...
	2025/10/06 18:45:48 Ready to marshal response ...
	2025/10/06 18:45:48 Ready to write response ...
	
	
	==> kernel <==
	 18:45:59 up 28 min,  0 user,  load average: 2.22, 1.35, 0.55
	Linux addons-442328 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21] <==
	E1006 18:44:16.450111       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1006 18:44:17.949281       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 18:44:17.949320       1 metrics.go:72] Registering metrics
	I1006 18:44:17.949401       1 controller.go:711] "Syncing nftables rules"
	E1006 18:44:17.949776       1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory"
	I1006 18:44:26.451836       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:44:26.451959       1 main.go:301] handling current node
	I1006 18:44:36.450030       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:44:36.450065       1 main.go:301] handling current node
	I1006 18:44:46.449016       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:44:46.449045       1 main.go:301] handling current node
	I1006 18:44:56.448012       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:44:56.448048       1 main.go:301] handling current node
	I1006 18:45:06.451829       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:45:06.451861       1 main.go:301] handling current node
	I1006 18:45:16.448593       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:45:16.448678       1 main.go:301] handling current node
	I1006 18:45:26.451906       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:45:26.451946       1 main.go:301] handling current node
	I1006 18:45:36.448958       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:45:36.449044       1 main.go:301] handling current node
	I1006 18:45:46.450250       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:45:46.450285       1 main.go:301] handling current node
	I1006 18:45:56.451779       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:45:56.451811       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea] <==
	E1006 18:44:26.690887       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.94.49:443: connect: connection refused" logger="UnhandledError"
	W1006 18:44:26.689700       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.94.49:443: connect: connection refused
	E1006 18:44:26.692264       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.94.49:443: connect: connection refused" logger="UnhandledError"
	W1006 18:44:26.772122       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.94.49:443: connect: connection refused
	E1006 18:44:26.772165       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.94.49:443: connect: connection refused" logger="UnhandledError"
	W1006 18:44:51.551568       1 handler_proxy.go:99] no RequestInfo found in the context
	E1006 18:44:51.551624       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1006 18:44:51.551638       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1006 18:44:51.556686       1 handler_proxy.go:99] no RequestInfo found in the context
	E1006 18:44:51.556774       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1006 18:44:51.556785       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1006 18:44:58.451396       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.134.170:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.134.170:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.134.170:443: connect: connection refused" logger="UnhandledError"
	W1006 18:44:58.451981       1 handler_proxy.go:99] no RequestInfo found in the context
	E1006 18:44:58.452054       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1006 18:44:58.453649       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.134.170:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.134.170:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.134.170:443: connect: connection refused" logger="UnhandledError"
	E1006 18:44:58.477230       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.134.170:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.134.170:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.134.170:443: connect: connection refused" logger="UnhandledError"
	I1006 18:44:58.630548       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1006 18:45:56.624235       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58670: use of closed network connection
	E1006 18:45:56.868923       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58694: use of closed network connection
	
	
	==> kube-controller-manager [9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b] <==
	I1006 18:43:44.302483       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1006 18:43:44.316677       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 18:43:44.323847       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1006 18:43:44.323920       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1006 18:43:44.323941       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1006 18:43:44.323955       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1006 18:43:44.323962       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1006 18:43:44.330437       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1006 18:43:44.332816       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-442328" podCIDRs=["10.244.0.0/24"]
	I1006 18:43:44.341730       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1006 18:43:44.344049       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 18:43:44.348677       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 18:43:44.348757       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 18:43:44.348772       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1006 18:43:50.664411       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1006 18:44:14.321562       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1006 18:44:14.321697       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1006 18:44:14.321750       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1006 18:44:14.351883       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1006 18:44:14.355819       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1006 18:44:14.422377       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 18:44:14.456110       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 18:44:29.295872       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1006 18:44:44.431452       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1006 18:44:44.491422       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce] <==
	I1006 18:43:46.968787       1 server_linux.go:53] "Using iptables proxy"
	I1006 18:43:47.050311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 18:43:47.151174       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 18:43:47.151216       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1006 18:43:47.151311       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 18:43:47.249157       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 18:43:47.249199       1 server_linux.go:132] "Using iptables Proxier"
	I1006 18:43:47.255179       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 18:43:47.265431       1 server.go:527] "Version info" version="v1.34.1"
	I1006 18:43:47.265458       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 18:43:47.268782       1 config.go:200] "Starting service config controller"
	I1006 18:43:47.268794       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 18:43:47.268816       1 config.go:106] "Starting endpoint slice config controller"
	I1006 18:43:47.268822       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 18:43:47.268840       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 18:43:47.268844       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 18:43:47.270719       1 config.go:309] "Starting node config controller"
	I1006 18:43:47.270728       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 18:43:47.270735       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 18:43:47.396351       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 18:43:47.396385       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 18:43:47.396414       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907] <==
	E1006 18:43:37.380956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1006 18:43:37.381054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1006 18:43:37.381124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1006 18:43:37.384051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1006 18:43:37.384219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1006 18:43:37.384314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1006 18:43:37.386615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1006 18:43:37.386831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1006 18:43:37.386948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1006 18:43:37.387051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1006 18:43:37.389000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1006 18:43:37.389214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1006 18:43:37.389344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1006 18:43:37.389501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1006 18:43:37.389608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1006 18:43:38.206261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1006 18:43:38.284336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1006 18:43:38.320585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1006 18:43:38.341045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1006 18:43:38.341657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1006 18:43:38.399042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1006 18:43:38.481764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1006 18:43:38.485828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1006 18:43:38.514705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1006 18:43:39.071359       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 18:45:13 addons-442328 kubelet[1264]: I1006 18:45:13.837476    1264 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a65102-a041-4f5c-b6d7-91fcaf170b20-kube-api-access-dm998" (OuterVolumeSpecName: "kube-api-access-dm998") pod "c6a65102-a041-4f5c-b6d7-91fcaf170b20" (UID: "c6a65102-a041-4f5c-b6d7-91fcaf170b20"). InnerVolumeSpecName "kube-api-access-dm998". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 06 18:45:13 addons-442328 kubelet[1264]: I1006 18:45:13.837678    1264 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a96af51b-8bac-4c84-932b-532fcc3b5be7-kube-api-access-lxzvg" (OuterVolumeSpecName: "kube-api-access-lxzvg") pod "a96af51b-8bac-4c84-932b-532fcc3b5be7" (UID: "a96af51b-8bac-4c84-932b-532fcc3b5be7"). InnerVolumeSpecName "kube-api-access-lxzvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 06 18:45:13 addons-442328 kubelet[1264]: I1006 18:45:13.935620    1264 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dm998\" (UniqueName: \"kubernetes.io/projected/c6a65102-a041-4f5c-b6d7-91fcaf170b20-kube-api-access-dm998\") on node \"addons-442328\" DevicePath \"\""
	Oct 06 18:45:13 addons-442328 kubelet[1264]: I1006 18:45:13.935665    1264 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lxzvg\" (UniqueName: \"kubernetes.io/projected/a96af51b-8bac-4c84-932b-532fcc3b5be7-kube-api-access-lxzvg\") on node \"addons-442328\" DevicePath \"\""
	Oct 06 18:45:14 addons-442328 kubelet[1264]: I1006 18:45:14.614148    1264 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60ce7451efd4cc6e7e8c8e25858a8d3e0e540907bf6a9ee01cd2e16bfcc2283a"
	Oct 06 18:45:14 addons-442328 kubelet[1264]: I1006 18:45:14.618826    1264 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47254b5190bf67c2d91189f92884240055fe3eb615243ea54e8c75dfa2d5ddd3"
	Oct 06 18:45:15 addons-442328 kubelet[1264]: I1006 18:45:15.624409    1264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zqdsp" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 18:45:16 addons-442328 kubelet[1264]: I1006 18:45:16.633954    1264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zqdsp" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 18:45:19 addons-442328 kubelet[1264]: I1006 18:45:19.700586    1264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-6t8nb" podStartSLOduration=65.370030485 podStartE2EDuration="1m29.700566299s" podCreationTimestamp="2025-10-06 18:43:50 +0000 UTC" firstStartedPulling="2025-10-06 18:44:54.615622989 +0000 UTC m=+74.877389398" lastFinishedPulling="2025-10-06 18:45:18.946158803 +0000 UTC m=+99.207925212" observedRunningTime="2025-10-06 18:45:19.698037303 +0000 UTC m=+99.959803737" watchObservedRunningTime="2025-10-06 18:45:19.700566299 +0000 UTC m=+99.962332708"
	Oct 06 18:45:19 addons-442328 kubelet[1264]: I1006 18:45:19.701226    1264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-zqdsp" podStartSLOduration=6.735344969 podStartE2EDuration="53.701216419s" podCreationTimestamp="2025-10-06 18:44:26 +0000 UTC" firstStartedPulling="2025-10-06 18:44:28.358821829 +0000 UTC m=+48.620588238" lastFinishedPulling="2025-10-06 18:45:15.324693279 +0000 UTC m=+95.586459688" observedRunningTime="2025-10-06 18:45:15.644444435 +0000 UTC m=+95.906210868" watchObservedRunningTime="2025-10-06 18:45:19.701216419 +0000 UTC m=+99.962982828"
	Oct 06 18:45:23 addons-442328 kubelet[1264]: I1006 18:45:23.695045    1264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-5qbhb" podStartSLOduration=64.548130062 podStartE2EDuration="1m28.695014003s" podCreationTimestamp="2025-10-06 18:43:55 +0000 UTC" firstStartedPulling="2025-10-06 18:44:59.085361804 +0000 UTC m=+79.347128213" lastFinishedPulling="2025-10-06 18:45:23.232245737 +0000 UTC m=+103.494012154" observedRunningTime="2025-10-06 18:45:23.692038093 +0000 UTC m=+103.953804526" watchObservedRunningTime="2025-10-06 18:45:23.695014003 +0000 UTC m=+103.956780412"
	Oct 06 18:45:29 addons-442328 kubelet[1264]: I1006 18:45:29.716459    1264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-55rt2" podStartSLOduration=68.489085098 podStartE2EDuration="1m38.716442137s" podCreationTimestamp="2025-10-06 18:43:51 +0000 UTC" firstStartedPulling="2025-10-06 18:44:59.260057544 +0000 UTC m=+79.521823961" lastFinishedPulling="2025-10-06 18:45:29.487414591 +0000 UTC m=+109.749181000" observedRunningTime="2025-10-06 18:45:29.715972609 +0000 UTC m=+109.977739026" watchObservedRunningTime="2025-10-06 18:45:29.716442137 +0000 UTC m=+109.978208554"
	Oct 06 18:45:30 addons-442328 kubelet[1264]: E1006 18:45:30.684023    1264 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 06 18:45:30 addons-442328 kubelet[1264]: E1006 18:45:30.684102    1264 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ae00ec6-941d-4df4-a11f-08481b31d714-gcr-creds podName:7ae00ec6-941d-4df4-a11f-08481b31d714 nodeName:}" failed. No retries permitted until 2025-10-06 18:46:34.684081513 +0000 UTC m=+174.945847930 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/7ae00ec6-941d-4df4-a11f-08481b31d714-gcr-creds") pod "registry-creds-764b6fb674-pgnhk" (UID: "7ae00ec6-941d-4df4-a11f-08481b31d714") : secret "registry-creds-gcr" not found
	Oct 06 18:45:31 addons-442328 kubelet[1264]: I1006 18:45:31.874015    1264 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e14d911-0d21-4971-a02b-9de693a14e53" path="/var/lib/kubelet/pods/3e14d911-0d21-4971-a02b-9de693a14e53/volumes"
	Oct 06 18:45:32 addons-442328 kubelet[1264]: I1006 18:45:32.117055    1264 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 06 18:45:32 addons-442328 kubelet[1264]: I1006 18:45:32.117106    1264 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 06 18:45:34 addons-442328 kubelet[1264]: I1006 18:45:34.750366    1264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-g7kvd" podStartSLOduration=2.148969758 podStartE2EDuration="1m8.750348635s" podCreationTimestamp="2025-10-06 18:44:26 +0000 UTC" firstStartedPulling="2025-10-06 18:44:27.839597643 +0000 UTC m=+48.101364052" lastFinishedPulling="2025-10-06 18:45:34.44097652 +0000 UTC m=+114.702742929" observedRunningTime="2025-10-06 18:45:34.748524702 +0000 UTC m=+115.010291119" watchObservedRunningTime="2025-10-06 18:45:34.750348635 +0000 UTC m=+115.012115044"
	Oct 06 18:45:39 addons-442328 kubelet[1264]: I1006 18:45:39.880612    1264 scope.go:117] "RemoveContainer" containerID="89f2db68cbdbbc351cbb107b8afd4a9e3211a03d3f65db93d1209bbee886ae43"
	Oct 06 18:45:40 addons-442328 kubelet[1264]: E1006 18:45:40.007174    1264 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6686cfb10051959c7e29adfdf1d6d5b4fb7abd046889e976aabf131b7f0a37ea/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6686cfb10051959c7e29adfdf1d6d5b4fb7abd046889e976aabf131b7f0a37ea/diff: no such file or directory, extraDiskErr: <nil>
	Oct 06 18:45:45 addons-442328 kubelet[1264]: I1006 18:45:45.873734    1264 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a96af51b-8bac-4c84-932b-532fcc3b5be7" path="/var/lib/kubelet/pods/a96af51b-8bac-4c84-932b-532fcc3b5be7/volumes"
	Oct 06 18:45:48 addons-442328 kubelet[1264]: I1006 18:45:48.438818    1264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hkgm\" (UniqueName: \"kubernetes.io/projected/ed0705fd-1d38-4f92-9a24-929f35e2a002-kube-api-access-7hkgm\") pod \"busybox\" (UID: \"ed0705fd-1d38-4f92-9a24-929f35e2a002\") " pod="default/busybox"
	Oct 06 18:45:48 addons-442328 kubelet[1264]: I1006 18:45:48.439361    1264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ed0705fd-1d38-4f92-9a24-929f35e2a002-gcp-creds\") pod \"busybox\" (UID: \"ed0705fd-1d38-4f92-9a24-929f35e2a002\") " pod="default/busybox"
	Oct 06 18:45:48 addons-442328 kubelet[1264]: W1006 18:45:48.679026    1264 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c722e206d4345afe9840c25fc51924a9dba29c6ed70fc1f945ffae17f5dfd27/crio-977564ff1914fcb45832cca65cc453236c574a3105c1633d2d15e9878999e8e1 WatchSource:0}: Error finding container 977564ff1914fcb45832cca65cc453236c574a3105c1633d2d15e9878999e8e1: Status 404 returned error can't find the container with id 977564ff1914fcb45832cca65cc453236c574a3105c1633d2d15e9878999e8e1
	Oct 06 18:45:56 addons-442328 kubelet[1264]: I1006 18:45:56.870748    1264 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-k4cb6" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58] <==
	W1006 18:45:34.671465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:36.674223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:36.680807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:38.684491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:38.690035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:40.693620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:40.699690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:42.703185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:42.707809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:44.716959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:44.725281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:46.728194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:46.733430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:48.737182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:48.745946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:50.749592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:50.755898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:52.759110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:52.766029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:54.769342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:54.773714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:56.777733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:56.785510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:58.796740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 18:45:58.811537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-442328 -n addons-442328
helpers_test.go:269: (dbg) Run:  kubectl --context addons-442328 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-xspd2 ingress-nginx-admission-patch-7ksts registry-creds-764b6fb674-pgnhk
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-442328 describe pod ingress-nginx-admission-create-xspd2 ingress-nginx-admission-patch-7ksts registry-creds-764b6fb674-pgnhk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-442328 describe pod ingress-nginx-admission-create-xspd2 ingress-nginx-admission-patch-7ksts registry-creds-764b6fb674-pgnhk: exit status 1 (268.579206ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xspd2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7ksts" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-pgnhk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-442328 describe pod ingress-nginx-admission-create-xspd2 ingress-nginx-admission-patch-7ksts registry-creds-764b6fb674-pgnhk: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 addons disable headlamp --alsologtostderr -v=1: exit status 11 (338.946159ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 18:46:00.434485   11694 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:46:00.435911   11694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:00.435927   11694 out.go:374] Setting ErrFile to fd 2...
	I1006 18:46:00.435934   11694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:00.436373   11694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:46:00.436869   11694 mustload.go:65] Loading cluster: addons-442328
	I1006 18:46:00.437547   11694 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:00.437562   11694 addons.go:606] checking whether the cluster is paused
	I1006 18:46:00.437692   11694 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:00.437714   11694 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:46:00.438426   11694 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:46:00.487602   11694 ssh_runner.go:195] Run: systemctl --version
	I1006 18:46:00.487734   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:46:00.509647   11694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:46:00.610559   11694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:46:00.610643   11694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:46:00.642039   11694 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:46:00.642073   11694 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:46:00.642079   11694 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:46:00.642083   11694 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:46:00.642086   11694 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:46:00.642090   11694 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:46:00.642112   11694 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:46:00.642119   11694 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:46:00.642127   11694 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:46:00.642142   11694 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:46:00.642151   11694 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:46:00.642155   11694 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:46:00.642158   11694 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:46:00.642161   11694 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:46:00.642164   11694 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:46:00.642173   11694 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:46:00.642184   11694 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:46:00.642189   11694 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:46:00.642192   11694 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:46:00.642195   11694 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:46:00.642200   11694 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:46:00.642205   11694 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:46:00.642210   11694 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:46:00.642213   11694 cri.go:89] found id: ""
	I1006 18:46:00.642271   11694 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:46:00.657624   11694 out.go:203] 
	W1006 18:46:00.660585   11694 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:46:00.660617   11694 out.go:285] * 
	* 
	W1006 18:46:00.664379   11694 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:46:00.667397   11694 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-442328 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.42s)
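Note: the `addons disable` failures in this run share one pattern visible in the stderr trace above: minikube's paused-state check first lists kube-system containers through the CRI (`crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, which succeeds) and then shells out to `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory". A minimal diagnostic sketch, hedged: it assumes SSH access to the node via the same profile, and that CRI-O on this image may be using crun, whose state directory is typically /run/crun rather than /run/runc.
	# Check which OCI runtime state directory actually exists on the node
	# (the absent one is what the pause check tries to read):
	out/minikube-linux-arm64 -p addons-442328 ssh -- "sudo ls -d /run/runc /run/crun 2>&1"
	# The kube-system containers are still visible through the CRI, so the
	# failure is confined to the runc-based pause check, not the workloads:
	out/minikube-linux-arm64 -p addons-442328 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"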

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-fcmlt" [d99e811a-ce6b-4685-aeea-585598b04b1e] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003104956s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (254.362128ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 18:47:13.493990   13609 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:47:13.494236   13609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:47:13.494272   13609 out.go:374] Setting ErrFile to fd 2...
	I1006 18:47:13.494292   13609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:47:13.494824   13609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:47:13.495175   13609 mustload.go:65] Loading cluster: addons-442328
	I1006 18:47:13.495658   13609 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:47:13.495736   13609 addons.go:606] checking whether the cluster is paused
	I1006 18:47:13.495877   13609 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:47:13.495920   13609 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:47:13.496423   13609 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:47:13.515176   13609 ssh_runner.go:195] Run: systemctl --version
	I1006 18:47:13.515236   13609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:47:13.533128   13609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:47:13.630565   13609 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:47:13.630666   13609 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:47:13.659621   13609 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:47:13.659640   13609 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:47:13.659645   13609 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:47:13.659649   13609 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:47:13.659657   13609 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:47:13.659661   13609 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:47:13.659664   13609 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:47:13.659667   13609 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:47:13.659670   13609 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:47:13.659677   13609 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:47:13.659680   13609 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:47:13.659684   13609 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:47:13.659687   13609 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:47:13.659690   13609 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:47:13.659693   13609 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:47:13.659746   13609 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:47:13.659751   13609 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:47:13.659756   13609 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:47:13.659759   13609 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:47:13.659762   13609 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:47:13.659767   13609 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:47:13.659770   13609 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:47:13.659773   13609 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:47:13.659776   13609 cri.go:89] found id: ""
	I1006 18:47:13.659825   13609 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:47:13.674521   13609 out.go:203] 
	W1006 18:47:13.677415   13609 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:47:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:47:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:47:13.677459   13609 out.go:285] * 
	* 
	W1006 18:47:13.681638   13609 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:47:13.684545   13609 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-442328 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.27s)
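Note on the failure mode shared by the addon-disable tests in this run: before disabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" on the node, and that command exits 1 with "open /run/runc: no such file or directory" on this CRI-O node, so every "addons disable" invocation bails out with MK_ADDON_DISABLE_PAUSED even though crictl lists the kube-system containers without trouble. A minimal manual check on the node, sketched below with hypothetical shell commands inferred from the logs (the profile name addons-442328 is taken from this run; it is not a recorded part of the test output):

	minikube -p addons-442328 ssh                    # open a shell on the node
	# inside the node:
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds, matching the "found id" lines above
	sudo runc list -f json                           # reproduces: open /run/runc: no such file or directory
	ls -d /run/runc 2>/dev/null || echo "no runc state dir"   # likely absent; CRI-O may use a different OCI runtime or state root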

                                                
                                    
TestAddons/parallel/LocalPath (8.41s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-442328 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-442328 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-442328 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [129aed68-505d-4434-af48-55c935852350] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [129aed68-505d-4434-af48-55c935852350] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [129aed68-505d-4434-af48-55c935852350] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003557786s
addons_test.go:967: (dbg) Run:  kubectl --context addons-442328 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 ssh "cat /opt/local-path-provisioner/pvc-2601b9e6-af89-4f79-9a4d-c4aea7149f93_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-442328 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-442328 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (314.252669ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 18:46:56.673456   13382 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:46:56.673664   13382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:56.673697   13382 out.go:374] Setting ErrFile to fd 2...
	I1006 18:46:56.673717   13382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:46:56.673996   13382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:46:56.674304   13382 mustload.go:65] Loading cluster: addons-442328
	I1006 18:46:56.674717   13382 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:56.674778   13382 addons.go:606] checking whether the cluster is paused
	I1006 18:46:56.674920   13382 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:46:56.674964   13382 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:46:56.675438   13382 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:46:56.691897   13382 ssh_runner.go:195] Run: systemctl --version
	I1006 18:46:56.691947   13382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:46:56.719960   13382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:46:56.818443   13382 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:46:56.818580   13382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:46:56.859376   13382 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:46:56.859400   13382 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:46:56.859406   13382 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:46:56.859415   13382 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:46:56.859419   13382 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:46:56.859423   13382 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:46:56.859457   13382 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:46:56.859460   13382 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:46:56.859464   13382 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:46:56.859472   13382 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:46:56.859480   13382 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:46:56.859483   13382 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:46:56.859487   13382 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:46:56.859490   13382 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:46:56.859493   13382 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:46:56.859499   13382 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:46:56.859506   13382 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:46:56.859524   13382 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:46:56.859535   13382 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:46:56.859539   13382 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:46:56.859545   13382 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:46:56.859552   13382 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:46:56.859555   13382 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:46:56.859558   13382 cri.go:89] found id: ""
	I1006 18:46:56.859626   13382 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:46:56.894614   13382 out.go:203] 
	W1006 18:46:56.898302   13382 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:46:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:46:56.898339   13382 out.go:285] * 
	* 
	W1006 18:46:56.902630   13382 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:46:56.906532   13382 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-442328 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.41s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-2ptdw" [9a959e36-17ef-49b2-8c50-36b639bbbbf7] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003869598s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (249.707714ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 18:47:08.225043   13549 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:47:08.225195   13549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:47:08.225204   13549 out.go:374] Setting ErrFile to fd 2...
	I1006 18:47:08.225209   13549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:47:08.225454   13549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:47:08.225711   13549 mustload.go:65] Loading cluster: addons-442328
	I1006 18:47:08.226054   13549 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:47:08.226072   13549 addons.go:606] checking whether the cluster is paused
	I1006 18:47:08.226172   13549 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:47:08.226188   13549 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:47:08.226606   13549 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:47:08.244986   13549 ssh_runner.go:195] Run: systemctl --version
	I1006 18:47:08.245043   13549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:47:08.261760   13549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:47:08.357930   13549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:47:08.358014   13549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:47:08.387842   13549 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:47:08.387864   13549 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:47:08.387869   13549 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:47:08.387873   13549 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:47:08.387881   13549 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:47:08.387885   13549 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:47:08.387888   13549 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:47:08.387892   13549 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:47:08.387895   13549 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:47:08.387901   13549 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:47:08.387904   13549 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:47:08.387907   13549 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:47:08.387910   13549 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:47:08.387913   13549 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:47:08.387917   13549 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:47:08.387922   13549 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:47:08.387925   13549 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:47:08.387929   13549 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:47:08.387932   13549 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:47:08.387935   13549 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:47:08.387939   13549 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:47:08.387946   13549 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:47:08.387949   13549 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:47:08.387952   13549 cri.go:89] found id: ""
	I1006 18:47:08.388004   13549 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:47:08.409348   13549 out.go:203] 
	W1006 18:47:08.412201   13549 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:47:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:47:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:47:08.412226   13549 out.go:285] * 
	* 
	W1006 18:47:08.416025   13549 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:47:08.418818   13549 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-442328 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)

                                                
                                    
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-ffptf" [b05f2f88-a2ee-42d9-9c6d-f929eeb079da] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005113204s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-442328 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-442328 addons disable yakd --alsologtostderr -v=1: exit status 11 (250.715817ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 18:47:02.971528   13488 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:47:02.971745   13488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:47:02.971758   13488 out.go:374] Setting ErrFile to fd 2...
	I1006 18:47:02.971824   13488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:47:02.972435   13488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:47:02.972783   13488 mustload.go:65] Loading cluster: addons-442328
	I1006 18:47:02.973193   13488 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:47:02.973237   13488 addons.go:606] checking whether the cluster is paused
	I1006 18:47:02.973366   13488 config.go:182] Loaded profile config "addons-442328": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:47:02.973406   13488 host.go:66] Checking if "addons-442328" exists ...
	I1006 18:47:02.973915   13488 cli_runner.go:164] Run: docker container inspect addons-442328 --format={{.State.Status}}
	I1006 18:47:02.991337   13488 ssh_runner.go:195] Run: systemctl --version
	I1006 18:47:02.991396   13488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-442328
	I1006 18:47:03.009438   13488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/addons-442328/id_rsa Username:docker}
	I1006 18:47:03.106265   13488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:47:03.106355   13488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:47:03.138879   13488 cri.go:89] found id: "ebe3bd43a396ef825f9ce2383a8f854705eb0fe8cc731998dc8e54caa0c556da"
	I1006 18:47:03.138898   13488 cri.go:89] found id: "4cd182a7b457fdd19f5abf043db3fae987b3a227baa70e7cd0251be02e768d79"
	I1006 18:47:03.138906   13488 cri.go:89] found id: "08ff7320e4ddcb6923c21c744037216228cafd072dc09cfe448a83bc44918154"
	I1006 18:47:03.138911   13488 cri.go:89] found id: "9ca99ca5ed94b679d8bf6fa05d9e992d767dbca537ff3524c1ba82747167969d"
	I1006 18:47:03.138915   13488 cri.go:89] found id: "14f703b6467c68e5f7dedd7a8ccf32a80265d117ab37abf1d2666a32caf5beda"
	I1006 18:47:03.138919   13488 cri.go:89] found id: "6172f7270c3d65c827bb8fa830d763bd379cc07be16d284bc4c59a48ef10709c"
	I1006 18:47:03.138922   13488 cri.go:89] found id: "8b64f9fe4af619af52e8d4111939a9703e02a5d3f0565c752bf3a85d2aee65ec"
	I1006 18:47:03.138925   13488 cri.go:89] found id: "aa4f571114f60f2401085b3f65a45f9b7779dbaf2d7716f39c56f2c69f8f358f"
	I1006 18:47:03.138928   13488 cri.go:89] found id: "e949d4e668015edc95cab568d722d15a1fd49365b917c054c9a337a848a93ac9"
	I1006 18:47:03.138934   13488 cri.go:89] found id: "eaea2302c49adf3c43a2f0f249dc8fe988f23818d829a10e9709a6f20858af7b"
	I1006 18:47:03.138937   13488 cri.go:89] found id: "156dcf2838ea3731822ceba3a07277518b4ba34e9c5891fb49941d98f1c8b0f7"
	I1006 18:47:03.138940   13488 cri.go:89] found id: "f77ff0e3011bc38bba6de70627d082e5a5194d7fefb407f7b46b7b0c832849e8"
	I1006 18:47:03.138943   13488 cri.go:89] found id: "47db75a0747be97ad93e878b9fbe0acf228502f6a1061340fa99f4c190a83619"
	I1006 18:47:03.138946   13488 cri.go:89] found id: "c540f260b155ab1508ef8fdf1254b0816d0e606eb67ae59f3b2feab7ea526b88"
	I1006 18:47:03.138950   13488 cri.go:89] found id: "d982508a1963aa0737bdefbe4ed7bcad12d4779b68c0ee3b75aa5718e1e8cd6f"
	I1006 18:47:03.138954   13488 cri.go:89] found id: "4a454595ca74de219ba83b946bf8f8f21366c2454667c8f983e1248a703aee58"
	I1006 18:47:03.138957   13488 cri.go:89] found id: "1ee286915381b5158fa5a27a330f7e53f554ce0fe7d429c57ec3ea2868775dc6"
	I1006 18:47:03.138960   13488 cri.go:89] found id: "50d122dfe0df6b0e19a330b8d906a26a7160818c3c2342f861b94db61dbde4ce"
	I1006 18:47:03.138964   13488 cri.go:89] found id: "5c527c40e2db3acd1fa72c315739a326754a5efad7aa6b520cc2b554f0eb3c21"
	I1006 18:47:03.138967   13488 cri.go:89] found id: "8f11330482798f65fa8bce0a640073ba69894daff8d3b583e82b1d331b4098ea"
	I1006 18:47:03.138972   13488 cri.go:89] found id: "4b78f6f126d3c9aae8097883cf29eb9096f58eed0eade4dc15720c72d0617907"
	I1006 18:47:03.138975   13488 cri.go:89] found id: "c436c27fc179e1d50b93c6d7a510f2ccc24e197558a2826868724e75f1d2d96f"
	I1006 18:47:03.138978   13488 cri.go:89] found id: "9bb11c78f552514a88dbee09642e0026e29ed4e8e4982ce305d891d657f27d0b"
	I1006 18:47:03.138981   13488 cri.go:89] found id: ""
	I1006 18:47:03.139029   13488 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 18:47:03.154350   13488 out.go:203] 
	W1006 18:47:03.157375   13488 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:47:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:47:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 18:47:03.157405   13488 out.go:285] * 
	* 
	W1006 18:47:03.161285   13488 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 18:47:03.164174   13488 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-442328 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

                                                
                                    
TestForceSystemdFlag (513.67s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-203169 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-203169 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m29.857207104s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-203169] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-203169" primary control-plane node in "force-systemd-flag-203169" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:39:19.789648  168504 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:39:19.789877  168504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:39:19.789904  168504 out.go:374] Setting ErrFile to fd 2...
	I1006 19:39:19.789923  168504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:39:19.790215  168504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:39:19.790738  168504 out.go:368] Setting JSON to false
	I1006 19:39:19.791812  168504 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4895,"bootTime":1759774665,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:39:19.791903  168504 start.go:140] virtualization:  
	I1006 19:39:19.798153  168504 out.go:179] * [force-systemd-flag-203169] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:39:19.801720  168504 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:39:19.801803  168504 notify.go:220] Checking for updates...
	I1006 19:39:19.808397  168504 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:39:19.811496  168504 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:39:19.814774  168504 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:39:19.818048  168504 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:39:19.821097  168504 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:39:19.824564  168504 config.go:182] Loaded profile config "kubernetes-upgrade-977990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:39:19.824672  168504 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:39:19.851742  168504 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:39:19.851882  168504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:39:19.906783  168504 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:39:19.897762037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:39:19.906887  168504 docker.go:318] overlay module found
	I1006 19:39:19.910128  168504 out.go:179] * Using the docker driver based on user configuration
	I1006 19:39:19.913365  168504 start.go:304] selected driver: docker
	I1006 19:39:19.913390  168504 start.go:924] validating driver "docker" against <nil>
	I1006 19:39:19.913405  168504 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:39:19.914137  168504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:39:19.968802  168504 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:39:19.960035833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:39:19.968960  168504 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 19:39:19.969190  168504 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 19:39:19.972259  168504 out.go:179] * Using Docker driver with root privileges
	I1006 19:39:19.975047  168504 cni.go:84] Creating CNI manager for ""
	I1006 19:39:19.975113  168504 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:39:19.975126  168504 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 19:39:19.975204  168504 start.go:348] cluster config:
	{Name:force-systemd-flag-203169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-203169 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:39:19.978353  168504 out.go:179] * Starting "force-systemd-flag-203169" primary control-plane node in "force-systemd-flag-203169" cluster
	I1006 19:39:19.981099  168504 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:39:19.984198  168504 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:39:19.986918  168504 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:39:19.986974  168504 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:39:19.986986  168504 cache.go:58] Caching tarball of preloaded images
	I1006 19:39:19.987015  168504 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:39:19.987065  168504 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:39:19.987075  168504 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:39:19.987188  168504 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/config.json ...
	I1006 19:39:19.987206  168504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/config.json: {Name:mk18d47c645ca79714d888c31b5630f18d556aca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:39:20.009325  168504 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:39:20.009346  168504 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:39:20.009372  168504 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:39:20.009394  168504 start.go:360] acquireMachinesLock for force-systemd-flag-203169: {Name:mk7bf868e8c3610bedd3ba0fe6b7a4b3394a6608 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:39:20.009502  168504 start.go:364] duration metric: took 92.735µs to acquireMachinesLock for "force-systemd-flag-203169"
	I1006 19:39:20.009531  168504 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-203169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-203169 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:39:20.009600  168504 start.go:125] createHost starting for "" (driver="docker")
	I1006 19:39:20.013713  168504 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 19:39:20.013978  168504 start.go:159] libmachine.API.Create for "force-systemd-flag-203169" (driver="docker")
	I1006 19:39:20.014023  168504 client.go:168] LocalClient.Create starting
	I1006 19:39:20.014123  168504 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem
	I1006 19:39:20.014158  168504 main.go:141] libmachine: Decoding PEM data...
	I1006 19:39:20.014172  168504 main.go:141] libmachine: Parsing certificate...
	I1006 19:39:20.014230  168504 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem
	I1006 19:39:20.014246  168504 main.go:141] libmachine: Decoding PEM data...
	I1006 19:39:20.014256  168504 main.go:141] libmachine: Parsing certificate...
	I1006 19:39:20.020128  168504 cli_runner.go:164] Run: docker network inspect force-systemd-flag-203169 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 19:39:20.042077  168504 cli_runner.go:211] docker network inspect force-systemd-flag-203169 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 19:39:20.042182  168504 network_create.go:284] running [docker network inspect force-systemd-flag-203169] to gather additional debugging logs...
	I1006 19:39:20.042206  168504 cli_runner.go:164] Run: docker network inspect force-systemd-flag-203169
	W1006 19:39:20.059502  168504 cli_runner.go:211] docker network inspect force-systemd-flag-203169 returned with exit code 1
	I1006 19:39:20.059536  168504 network_create.go:287] error running [docker network inspect force-systemd-flag-203169]: docker network inspect force-systemd-flag-203169: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-203169 not found
	I1006 19:39:20.059551  168504 network_create.go:289] output of [docker network inspect force-systemd-flag-203169]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-203169 not found
	
	** /stderr **
	I1006 19:39:20.059755  168504 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:39:20.076939  168504 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7058eae896da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:57:c0:bd:de:1b} reservation:<nil>}
	I1006 19:39:20.077226  168504 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e321ce8cf6dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:e6:76:87:fe:bb} reservation:<nil>}
	I1006 19:39:20.077532  168504 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-17bdbd7bd7be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:93:f3:19:55:10} reservation:<nil>}
	I1006 19:39:20.077841  168504 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-54ee9ab47b05 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:a1:f6:00:f7:86} reservation:<nil>}
	I1006 19:39:20.078263  168504 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a56310}
	I1006 19:39:20.078285  168504 network_create.go:124] attempt to create docker network force-systemd-flag-203169 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1006 19:39:20.078354  168504 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-203169 force-systemd-flag-203169
	I1006 19:39:20.140215  168504 network_create.go:108] docker network force-systemd-flag-203169 192.168.85.0/24 created
	I1006 19:39:20.140262  168504 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-203169" container
	I1006 19:39:20.140336  168504 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 19:39:20.156771  168504 cli_runner.go:164] Run: docker volume create force-systemd-flag-203169 --label name.minikube.sigs.k8s.io=force-systemd-flag-203169 --label created_by.minikube.sigs.k8s.io=true
	I1006 19:39:20.175364  168504 oci.go:103] Successfully created a docker volume force-systemd-flag-203169
	I1006 19:39:20.175459  168504 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-203169-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-203169 --entrypoint /usr/bin/test -v force-systemd-flag-203169:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 19:39:20.699939  168504 oci.go:107] Successfully prepared a docker volume force-systemd-flag-203169
	I1006 19:39:20.699986  168504 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:39:20.700005  168504 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 19:39:20.700082  168504 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-203169:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 19:39:25.190490  168504 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-203169:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.490373825s)
	I1006 19:39:25.190522  168504 kic.go:203] duration metric: took 4.490514151s to extract preloaded images to volume ...
	W1006 19:39:25.190676  168504 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 19:39:25.190786  168504 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 19:39:25.254359  168504 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-203169 --name force-systemd-flag-203169 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-203169 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-203169 --network force-systemd-flag-203169 --ip 192.168.85.2 --volume force-systemd-flag-203169:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 19:39:25.559220  168504 cli_runner.go:164] Run: docker container inspect force-systemd-flag-203169 --format={{.State.Running}}
	I1006 19:39:25.581732  168504 cli_runner.go:164] Run: docker container inspect force-systemd-flag-203169 --format={{.State.Status}}
	I1006 19:39:25.612128  168504 cli_runner.go:164] Run: docker exec force-systemd-flag-203169 stat /var/lib/dpkg/alternatives/iptables
	I1006 19:39:25.666027  168504 oci.go:144] the created container "force-systemd-flag-203169" has a running status.
	I1006 19:39:25.666059  168504 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-flag-203169/id_rsa...
	I1006 19:39:26.244282  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-flag-203169/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 19:39:26.244336  168504 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-flag-203169/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 19:39:26.263271  168504 cli_runner.go:164] Run: docker container inspect force-systemd-flag-203169 --format={{.State.Status}}
	I1006 19:39:26.285086  168504 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 19:39:26.285107  168504 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-203169 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 19:39:26.328907  168504 cli_runner.go:164] Run: docker container inspect force-systemd-flag-203169 --format={{.State.Status}}
	I1006 19:39:26.347799  168504 machine.go:93] provisionDockerMachine start ...
	I1006 19:39:26.347897  168504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-203169
	I1006 19:39:26.369336  168504 main.go:141] libmachine: Using SSH client type: native
	I1006 19:39:26.369681  168504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1006 19:39:26.369691  168504 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:39:26.370402  168504 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1006 19:39:29.503601  168504 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-203169
	
	I1006 19:39:29.503626  168504 ubuntu.go:182] provisioning hostname "force-systemd-flag-203169"
	I1006 19:39:29.503737  168504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-203169
	I1006 19:39:29.520750  168504 main.go:141] libmachine: Using SSH client type: native
	I1006 19:39:29.521071  168504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1006 19:39:29.521088  168504 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-203169 && echo "force-systemd-flag-203169" | sudo tee /etc/hostname
	I1006 19:39:29.660837  168504 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-203169
	
	I1006 19:39:29.660966  168504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-203169
	I1006 19:39:29.678181  168504 main.go:141] libmachine: Using SSH client type: native
	I1006 19:39:29.678490  168504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1006 19:39:29.678507  168504 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-203169' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-203169/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-203169' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:39:29.807758  168504 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:39:29.807784  168504 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:39:29.807803  168504 ubuntu.go:190] setting up certificates
	I1006 19:39:29.807870  168504 provision.go:84] configureAuth start
	I1006 19:39:29.807950  168504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-203169
	I1006 19:39:29.824885  168504 provision.go:143] copyHostCerts
	I1006 19:39:29.824927  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:39:29.824961  168504 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:39:29.824974  168504 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:39:29.825060  168504 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:39:29.825148  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:39:29.825170  168504 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:39:29.825178  168504 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:39:29.825205  168504 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:39:29.825251  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:39:29.825272  168504 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:39:29.825279  168504 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:39:29.825303  168504 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:39:29.825355  168504 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-203169 san=[127.0.0.1 192.168.85.2 force-systemd-flag-203169 localhost minikube]
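	The server certificate generated here is signed for the SANs listed above (127.0.0.1, 192.168.85.2, force-systemd-flag-203169, localhost, minikube). As a sketch, once server.pem has been written it can be confirmed on the host with openssl (path taken from the log line above):
	
	  # print the SAN extension of the generated server certificate
	  $ openssl x509 -noout -text -in /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'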
	I1006 19:39:30.415417  168504 provision.go:177] copyRemoteCerts
	I1006 19:39:30.415467  168504 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:39:30.415507  168504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-203169
	I1006 19:39:30.434836  168504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-flag-203169/id_rsa Username:docker}
	I1006 19:39:30.546244  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 19:39:30.546306  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:39:30.571591  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 19:39:30.571653  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1006 19:39:30.601143  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 19:39:30.601251  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 19:39:30.625767  168504 provision.go:87] duration metric: took 817.870761ms to configureAuth
	I1006 19:39:30.625793  168504 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:39:30.625979  168504 config.go:182] Loaded profile config "force-systemd-flag-203169": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:39:30.626088  168504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-203169
	I1006 19:39:30.646436  168504 main.go:141] libmachine: Using SSH client type: native
	I1006 19:39:30.646729  168504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1006 19:39:30.646744  168504 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:39:30.936061  168504 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:39:30.936086  168504 machine.go:96] duration metric: took 4.588268406s to provisionDockerMachine
	I1006 19:39:30.936097  168504 client.go:171] duration metric: took 10.922067722s to LocalClient.Create
	I1006 19:39:30.936110  168504 start.go:167] duration metric: took 10.922134601s to libmachine.API.Create "force-systemd-flag-203169"
	I1006 19:39:30.936119  168504 start.go:293] postStartSetup for "force-systemd-flag-203169" (driver="docker")
	I1006 19:39:30.936132  168504 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:39:30.936196  168504 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:39:30.936238  168504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-203169
	I1006 19:39:30.962305  168504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-flag-203169/id_rsa Username:docker}
	I1006 19:39:31.067930  168504 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:39:31.071660  168504 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:39:31.071716  168504 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:39:31.071728  168504 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:39:31.071799  168504 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:39:31.071890  168504 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:39:31.071901  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> /etc/ssl/certs/43502.pem
	I1006 19:39:31.072154  168504 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:39:31.079814  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:39:31.098526  168504 start.go:296] duration metric: took 162.388583ms for postStartSetup
	I1006 19:39:31.098942  168504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-203169
	I1006 19:39:31.116268  168504 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/config.json ...
	I1006 19:39:31.116574  168504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:39:31.116624  168504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-203169
	I1006 19:39:31.134337  168504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-flag-203169/id_rsa Username:docker}
	I1006 19:39:31.228786  168504 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:39:31.233585  168504 start.go:128] duration metric: took 11.223970983s to createHost
	I1006 19:39:31.233606  168504 start.go:83] releasing machines lock for "force-systemd-flag-203169", held for 11.224095251s
	I1006 19:39:31.233676  168504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-203169
	I1006 19:39:31.251164  168504 ssh_runner.go:195] Run: cat /version.json
	I1006 19:39:31.251216  168504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-203169
	I1006 19:39:31.251229  168504 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:39:31.251291  168504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-203169
	I1006 19:39:31.268394  168504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-flag-203169/id_rsa Username:docker}
	I1006 19:39:31.289397  168504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-flag-203169/id_rsa Username:docker}
	I1006 19:39:31.363286  168504 ssh_runner.go:195] Run: systemctl --version
	I1006 19:39:31.455340  168504 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:39:31.498801  168504 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:39:31.503641  168504 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:39:31.503814  168504 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:39:31.531801  168504 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 19:39:31.531874  168504 start.go:495] detecting cgroup driver to use...
	I1006 19:39:31.531902  168504 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1006 19:39:31.531989  168504 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:39:31.549465  168504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:39:31.562391  168504 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:39:31.562453  168504 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:39:31.580897  168504 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:39:31.599565  168504 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:39:31.726308  168504 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:39:31.860426  168504 docker.go:234] disabling docker service ...
	I1006 19:39:31.860518  168504 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:39:31.881798  168504 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:39:31.894699  168504 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:39:32.018141  168504 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:39:32.139735  168504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:39:32.152996  168504 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:39:32.168389  168504 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:39:32.168457  168504 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:39:32.177304  168504 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 19:39:32.177426  168504 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:39:32.186454  168504 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:39:32.195953  168504 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:39:32.205283  168504 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:39:32.213650  168504 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:39:32.222062  168504 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:39:32.235141  168504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:39:32.243887  168504 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:39:32.251421  168504 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:39:32.259057  168504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:39:32.368629  168504 ssh_runner.go:195] Run: sudo systemctl restart crio
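	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A quick check on the node, as a sketch (expected values come straight from the commands above; line order in the drop-in may differ):
	
	  # confirm the pause image, systemd cgroup manager, conmon cgroup and unprivileged-port sysctl
	  $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	    "net.ipv4.ip_unprivileged_port_start=0",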
	I1006 19:39:32.487765  168504 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:39:32.487877  168504 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:39:32.491852  168504 start.go:563] Will wait 60s for crictl version
	I1006 19:39:32.491963  168504 ssh_runner.go:195] Run: which crictl
	I1006 19:39:32.495475  168504 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:39:32.520553  168504 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:39:32.520713  168504 ssh_runner.go:195] Run: crio --version
	I1006 19:39:32.554488  168504 ssh_runner.go:195] Run: crio --version
	I1006 19:39:32.588858  168504 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:39:32.591677  168504 cli_runner.go:164] Run: docker network inspect force-systemd-flag-203169 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:39:32.607923  168504 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1006 19:39:32.611946  168504 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:39:32.622020  168504 kubeadm.go:883] updating cluster {Name:force-systemd-flag-203169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-203169 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:39:32.622122  168504 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:39:32.622174  168504 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:39:32.655900  168504 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:39:32.655924  168504 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:39:32.655981  168504 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:39:32.680638  168504 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:39:32.680659  168504 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:39:32.680667  168504 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1006 19:39:32.680748  168504 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-203169 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-203169 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:39:32.680833  168504 ssh_runner.go:195] Run: crio config
	I1006 19:39:32.734736  168504 cni.go:84] Creating CNI manager for ""
	I1006 19:39:32.734760  168504 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:39:32.734776  168504 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:39:32.734798  168504 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-203169 NodeName:force-systemd-flag-203169 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:39:32.734933  168504 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-203169"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:39:32.735007  168504 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:39:32.742778  168504 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:39:32.742852  168504 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:39:32.750406  168504 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1006 19:39:32.764239  168504 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:39:32.777675  168504 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
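	The kubeadm configuration rendered above has just been copied to the node as /var/tmp/minikube/kubeadm.yaml.new and is renamed to kubeadm.yaml before init. As a sketch, it can be sanity-checked against the bundled binary, assuming the v1.34.1 kubeadm on the node supports the config validate subcommand:
	
	  # validate the generated config with the same kubeadm binary that will run init
	  $ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml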
	I1006 19:39:32.790839  168504 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:39:32.795236  168504 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:39:32.805168  168504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:39:32.912161  168504 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:39:32.928840  168504 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169 for IP: 192.168.85.2
	I1006 19:39:32.928874  168504 certs.go:195] generating shared ca certs ...
	I1006 19:39:32.928892  168504 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:39:32.929094  168504 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:39:32.929174  168504 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:39:32.929188  168504 certs.go:257] generating profile certs ...
	I1006 19:39:32.929258  168504 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/client.key
	I1006 19:39:32.929276  168504 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/client.crt with IP's: []
	I1006 19:39:33.768761  168504 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/client.crt ...
	I1006 19:39:33.768793  168504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/client.crt: {Name:mk75d2e2b60f9b70c2f04ad9646ac7ec64b5cc11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:39:33.769047  168504 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/client.key ...
	I1006 19:39:33.769065  168504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/client.key: {Name:mk6e71782fbc4e546589856bf12cec573b8e3803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:39:33.769198  168504 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/apiserver.key.6217772f
	I1006 19:39:33.769220  168504 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/apiserver.crt.6217772f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1006 19:39:34.326663  168504 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/apiserver.crt.6217772f ...
	I1006 19:39:34.326695  168504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/apiserver.crt.6217772f: {Name:mk2bb88adcaaa177506ba882d4d48302f526f546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:39:34.326918  168504 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/apiserver.key.6217772f ...
	I1006 19:39:34.326935  168504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/apiserver.key.6217772f: {Name:mk90e574f97a9d4d8e024571fdf13f3727306e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:39:34.327024  168504 certs.go:382] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/apiserver.crt.6217772f -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/apiserver.crt
	I1006 19:39:34.327109  168504 certs.go:386] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/apiserver.key.6217772f -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/apiserver.key
	I1006 19:39:34.327172  168504 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/proxy-client.key
	I1006 19:39:34.327190  168504 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/proxy-client.crt with IP's: []
	I1006 19:39:34.776960  168504 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/proxy-client.crt ...
	I1006 19:39:34.776993  168504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/proxy-client.crt: {Name:mked69f1e2f2079d23a8b095bafae9cd6bdccf50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:39:34.777174  168504 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/proxy-client.key ...
	I1006 19:39:34.777188  168504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/proxy-client.key: {Name:mkdf2083fcefb9357e17804c9c6eb1b513161206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:39:34.777274  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 19:39:34.777293  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 19:39:34.777305  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 19:39:34.777326  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 19:39:34.777342  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 19:39:34.777359  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 19:39:34.777375  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 19:39:34.777386  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 19:39:34.777440  168504 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:39:34.777480  168504 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:39:34.777491  168504 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:39:34.777515  168504 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:39:34.777541  168504 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:39:34.777566  168504 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:39:34.777612  168504 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:39:34.777647  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem -> /usr/share/ca-certificates/4350.pem
	I1006 19:39:34.777663  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> /usr/share/ca-certificates/43502.pem
	I1006 19:39:34.777676  168504 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:39:34.778161  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:39:34.798095  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:39:34.817037  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:39:34.835204  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:39:34.853215  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1006 19:39:34.871375  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1006 19:39:34.888559  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:39:34.905699  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-flag-203169/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 19:39:34.923393  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:39:34.941036  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:39:34.958810  168504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:39:34.977269  168504 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:39:34.990223  168504 ssh_runner.go:195] Run: openssl version
	I1006 19:39:34.997461  168504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:39:35.005910  168504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:39:35.009855  168504 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:39:35.009918  168504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:39:35.052011  168504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:39:35.060848  168504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:39:35.069459  168504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:39:35.073486  168504 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:39:35.073580  168504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:39:35.115053  168504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 19:39:35.123633  168504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:39:35.132014  168504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:39:35.135954  168504 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:39:35.136032  168504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:39:35.176903  168504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
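	Each `openssl x509 -hash -noout` run above prints the OpenSSL subject hash that names the matching symlink created right after it in /etc/ssl/certs (51391683.0, 3ec20f2e.0, b5213941.0); that hash-named link is how OpenSSL-based clients locate the CA. A minimal sketch of the same convention for the minikube CA:
	
	  # compute the subject hash and create the hash-named symlink OpenSSL looks up
	  $ h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"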
	I1006 19:39:35.185678  168504 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:39:35.189493  168504 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 19:39:35.189548  168504 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-203169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-203169 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:39:35.189618  168504 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:39:35.189693  168504 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:39:35.217628  168504 cri.go:89] found id: ""
	I1006 19:39:35.217755  168504 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:39:35.225889  168504 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 19:39:35.234064  168504 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:39:35.234157  168504 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:39:35.242320  168504 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:39:35.242387  168504 kubeadm.go:157] found existing configuration files:
	
	I1006 19:39:35.242448  168504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 19:39:35.250689  168504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:39:35.250769  168504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:39:35.258514  168504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 19:39:35.266716  168504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:39:35.266805  168504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:39:35.274966  168504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 19:39:35.283226  168504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:39:35.283297  168504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:39:35.291325  168504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 19:39:35.305331  168504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:39:35.305441  168504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 19:39:35.313837  168504 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:39:35.362388  168504 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:39:35.362795  168504 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:39:35.389507  168504 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:39:35.389678  168504 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:39:35.389752  168504 kubeadm.go:318] OS: Linux
	I1006 19:39:35.389830  168504 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:39:35.389909  168504 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:39:35.389991  168504 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:39:35.390080  168504 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:39:35.390163  168504 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:39:35.390250  168504 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:39:35.390334  168504 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:39:35.390419  168504 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:39:35.390501  168504 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:39:35.460926  168504 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:39:35.461110  168504 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:39:35.461244  168504 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:39:35.472174  168504 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 19:39:35.477809  168504 out.go:252]   - Generating certificates and keys ...
	I1006 19:39:35.477973  168504 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:39:35.478080  168504 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:39:35.899299  168504 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 19:39:36.270158  168504 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 19:39:37.656566  168504 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 19:39:38.140415  168504 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 19:39:38.782009  168504 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 19:39:38.782365  168504 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-203169 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1006 19:39:39.187805  168504 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 19:39:39.188125  168504 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-203169 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1006 19:39:39.319936  168504 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 19:39:39.635467  168504 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 19:39:40.517762  168504 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 19:39:40.517837  168504 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 19:39:41.180386  168504 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:39:41.365856  168504 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:39:41.822958  168504 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:39:42.240664  168504 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:39:42.443186  168504 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:39:42.443904  168504 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:39:42.446496  168504 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 19:39:42.450044  168504 out.go:252]   - Booting up control plane ...
	I1006 19:39:42.450152  168504 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:39:42.450238  168504 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:39:42.450311  168504 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:39:42.465455  168504 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:39:42.465568  168504 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:39:42.473560  168504 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:39:42.473943  168504 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:39:42.474206  168504 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:39:42.605699  168504 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:39:42.605827  168504 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:39:45.607613  168504 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 3.001969282s
	I1006 19:39:45.611692  168504 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:39:45.611842  168504 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1006 19:39:45.611959  168504 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:39:45.612068  168504 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 19:43:45.612712  168504 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000956017s
	I1006 19:43:45.612845  168504 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001041146s
	I1006 19:43:45.613693  168504 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000990122s
	I1006 19:43:45.613822  168504 kubeadm.go:318] 
	I1006 19:43:45.613929  168504 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 19:43:45.614028  168504 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 19:43:45.614342  168504 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 19:43:45.614457  168504 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 19:43:45.614610  168504 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 19:43:45.614761  168504 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 19:43:45.614790  168504 kubeadm.go:318] 
	I1006 19:43:45.620169  168504 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 19:43:45.620410  168504 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:43:45.620521  168504 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 19:43:45.621104  168504 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 19:43:45.621176  168504 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1006 19:43:45.621308  168504 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-203169 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-203169 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 3.001969282s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000956017s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001041146s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000990122s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-203169 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-203169 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 3.001969282s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000956017s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001041146s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000990122s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1006 19:43:45.621386  168504 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 19:43:46.127928  168504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:43:46.141141  168504 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:43:46.141209  168504 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:43:46.149317  168504 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:43:46.149339  168504 kubeadm.go:157] found existing configuration files:
	
	I1006 19:43:46.149389  168504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 19:43:46.157329  168504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:43:46.157393  168504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:43:46.165142  168504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 19:43:46.173158  168504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:43:46.173234  168504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:43:46.181037  168504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 19:43:46.189206  168504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:43:46.189277  168504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:43:46.197104  168504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 19:43:46.205450  168504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:43:46.205519  168504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 19:43:46.213647  168504 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:43:46.254264  168504 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:43:46.254471  168504 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:43:46.287961  168504 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:43:46.288033  168504 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:43:46.288071  168504 kubeadm.go:318] OS: Linux
	I1006 19:43:46.288119  168504 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:43:46.288170  168504 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:43:46.288220  168504 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:43:46.288274  168504 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:43:46.288324  168504 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:43:46.288375  168504 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:43:46.288422  168504 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:43:46.288486  168504 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:43:46.288535  168504 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:43:46.358617  168504 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:43:46.358726  168504 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:43:46.358817  168504 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:43:46.370288  168504 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 19:43:46.377053  168504 out.go:252]   - Generating certificates and keys ...
	I1006 19:43:46.377146  168504 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:43:46.377216  168504 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:43:46.377299  168504 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 19:43:46.377397  168504 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 19:43:46.377556  168504 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 19:43:46.377624  168504 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 19:43:46.377692  168504 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 19:43:46.377779  168504 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 19:43:46.377860  168504 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 19:43:46.378138  168504 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 19:43:46.378381  168504 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 19:43:46.378445  168504 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 19:43:46.573987  168504 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:43:46.702887  168504 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:43:47.063322  168504 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:43:47.456922  168504 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:43:47.901735  168504 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:43:47.902964  168504 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:43:47.907003  168504 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 19:43:47.910295  168504 out.go:252]   - Booting up control plane ...
	I1006 19:43:47.910396  168504 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:43:47.910473  168504 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:43:47.910540  168504 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:43:47.925029  168504 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:43:47.925767  168504 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:43:47.933355  168504 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:43:47.933664  168504 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:43:47.933722  168504 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:43:48.081873  168504 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:43:48.081997  168504 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:43:49.082616  168504 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000838819s
	I1006 19:43:49.088801  168504 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:43:49.088905  168504 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1006 19:43:49.089021  168504 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:43:49.089127  168504 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 19:47:49.087468  168504 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00036164s
	I1006 19:47:49.087637  168504 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000196379s
	I1006 19:47:49.088608  168504 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000819536s
	I1006 19:47:49.088634  168504 kubeadm.go:318] 
	I1006 19:47:49.088803  168504 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 19:47:49.088951  168504 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 19:47:49.089112  168504 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 19:47:49.089286  168504 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 19:47:49.089591  168504 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 19:47:49.089744  168504 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 19:47:49.089750  168504 kubeadm.go:318] 
	I1006 19:47:49.094209  168504 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 19:47:49.094556  168504 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:47:49.094712  168504 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 19:47:49.095472  168504 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 19:47:49.095572  168504 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 19:47:49.095646  168504 kubeadm.go:402] duration metric: took 8m13.906100786s to StartCluster
	I1006 19:47:49.095722  168504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:47:49.095790  168504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:47:49.122017  168504 cri.go:89] found id: ""
	I1006 19:47:49.122048  168504 logs.go:282] 0 containers: []
	W1006 19:47:49.122057  168504 logs.go:284] No container was found matching "kube-apiserver"
	I1006 19:47:49.122064  168504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:47:49.122122  168504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:47:49.149475  168504 cri.go:89] found id: ""
	I1006 19:47:49.149498  168504 logs.go:282] 0 containers: []
	W1006 19:47:49.149507  168504 logs.go:284] No container was found matching "etcd"
	I1006 19:47:49.149513  168504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:47:49.149579  168504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:47:49.175030  168504 cri.go:89] found id: ""
	I1006 19:47:49.175052  168504 logs.go:282] 0 containers: []
	W1006 19:47:49.175061  168504 logs.go:284] No container was found matching "coredns"
	I1006 19:47:49.175068  168504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:47:49.175127  168504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:47:49.200883  168504 cri.go:89] found id: ""
	I1006 19:47:49.200904  168504 logs.go:282] 0 containers: []
	W1006 19:47:49.200913  168504 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:47:49.200919  168504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:47:49.200980  168504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:47:49.226947  168504 cri.go:89] found id: ""
	I1006 19:47:49.226971  168504 logs.go:282] 0 containers: []
	W1006 19:47:49.226980  168504 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:47:49.227001  168504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:47:49.227063  168504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:47:49.253216  168504 cri.go:89] found id: ""
	I1006 19:47:49.253241  168504 logs.go:282] 0 containers: []
	W1006 19:47:49.253249  168504 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 19:47:49.253256  168504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:47:49.253315  168504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:47:49.284772  168504 cri.go:89] found id: ""
	I1006 19:47:49.284794  168504 logs.go:282] 0 containers: []
	W1006 19:47:49.284803  168504 logs.go:284] No container was found matching "kindnet"
	I1006 19:47:49.284812  168504 logs.go:123] Gathering logs for kubelet ...
	I1006 19:47:49.284823  168504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:47:49.375258  168504 logs.go:123] Gathering logs for dmesg ...
	I1006 19:47:49.375291  168504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:47:49.390955  168504 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:47:49.390984  168504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:47:49.463091  168504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 19:47:49.454380    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.455411    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.456744    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.457362    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.459020    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 19:47:49.454380    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.455411    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.456744    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.457362    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.459020    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:47:49.463112  168504 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:47:49.463126  168504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:47:49.538724  168504 logs.go:123] Gathering logs for container status ...
	I1006 19:47:49.538764  168504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1006 19:47:49.573145  168504 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000838819s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.00036164s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000196379s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000819536s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 19:47:49.573217  168504 out.go:285] * 
	* 
	W1006 19:47:49.573285  168504 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000838819s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.00036164s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000196379s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000819536s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000838819s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.00036164s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000196379s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000819536s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 19:47:49.573312  168504 out.go:285] * 
	* 
	W1006 19:47:49.576106  168504 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 19:47:49.583442  168504 out.go:203] 
	W1006 19:47:49.586296  168504 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000838819s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.00036164s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000196379s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000819536s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000838819s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.00036164s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000196379s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000819536s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 19:47:49.586322  168504 out.go:285] * 
	* 
	I1006 19:47:49.591381  168504 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-203169 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
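docker_test.go:93 gives up here because kubeadm never saw a healthy control plane within 4m0s. The wait-control-plane output above already names the next step; a minimal sketch of that loop, run inside the node (for example via `out/minikube-linux-arm64 -p force-systemd-flag-203169 ssh`) and using only the CRI-O endpoint kubeadm printed (CONTAINERID is a placeholder, as in kubeadm's own hint):
    # list every control-plane container CRI-O started (or failed to keep running)
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # read the logs of whichever container is exiting
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
kubeadm's other suggestion, re-running with --v=5 or higher, applies to the same `kubeadm init --config /var/tmp/minikube/kubeadm.yaml` invocation quoted in the GUEST_START error above.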
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-203169 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
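This check exists because the test exercises `--force-systemd`: even after the failed start it dumps the CRI-O drop-in to see whether the systemd cgroup driver reached disk. A hand-run equivalent, with the expectation hedged (cgroup_manager is the CRI-O setting this flag is meant to flip; the exact contents of 02-crio.conf are an assumption, not output captured in this run):
    out/minikube-linux-arm64 -p force-systemd-flag-203169 ssh "grep -R cgroup_manager /etc/crio/crio.conf.d/"
    out/minikube-linux-arm64 -p force-systemd-flag-203169 ssh "sudo crio config | grep cgroup_manager"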
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-10-06 19:47:49.956242913 +0000 UTC m=+3979.475309993
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-flag-203169
helpers_test.go:243: (dbg) docker inspect force-systemd-flag-203169:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "23e665ad7ab12f28e961cf62bc11cba7cb3e84c5376709a9a345a00a62a9bc69",
	        "Created": "2025-10-06T19:39:25.269243339Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 169106,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:39:25.330429463Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/23e665ad7ab12f28e961cf62bc11cba7cb3e84c5376709a9a345a00a62a9bc69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23e665ad7ab12f28e961cf62bc11cba7cb3e84c5376709a9a345a00a62a9bc69/hostname",
	        "HostsPath": "/var/lib/docker/containers/23e665ad7ab12f28e961cf62bc11cba7cb3e84c5376709a9a345a00a62a9bc69/hosts",
	        "LogPath": "/var/lib/docker/containers/23e665ad7ab12f28e961cf62bc11cba7cb3e84c5376709a9a345a00a62a9bc69/23e665ad7ab12f28e961cf62bc11cba7cb3e84c5376709a9a345a00a62a9bc69-json.log",
	        "Name": "/force-systemd-flag-203169",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-203169:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-203169",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23e665ad7ab12f28e961cf62bc11cba7cb3e84c5376709a9a345a00a62a9bc69",
	                "LowerDir": "/var/lib/docker/overlay2/53a9f31c7221bec8b80f37e0289b6756485369937133a44e2d9cce8762a2d435-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/53a9f31c7221bec8b80f37e0289b6756485369937133a44e2d9cce8762a2d435/merged",
	                "UpperDir": "/var/lib/docker/overlay2/53a9f31c7221bec8b80f37e0289b6756485369937133a44e2d9cce8762a2d435/diff",
	                "WorkDir": "/var/lib/docker/overlay2/53a9f31c7221bec8b80f37e0289b6756485369937133a44e2d9cce8762a2d435/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-203169",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-203169/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-203169",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-203169",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-203169",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "35eb138fcee02db5dab7499ac8c3ba58429dc1ee55e993f58e3d0f7b96502469",
	            "SandboxKey": "/var/run/docker/netns/35eb138fcee0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33030"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33031"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33034"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33032"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33033"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-203169": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:7e:f8:d3:31:e5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ede15371a991ed38cb38e3918f88472413f5bf6d4ae64fd81e495fa041a0e14e",
	                    "EndpointID": "4ca58b3cad111a6188cde9ab946d680f861f46b0e2f86bf7a37a8b6ac02ad232",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-203169",
	                        "23e665ad7ab1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
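The full `docker inspect` dump above is what the post-mortem stores; when poking at the container by hand, the same fields can be pulled out selectively with Go templates. A short sketch, reusing the format patterns minikube itself runs later in this log (for example `--format={{.State.Status}}` and the NetworkSettings.Ports index expression):
    docker inspect force-systemd-flag-203169 --format '{{.State.Status}} pid={{.State.Pid}}'
    docker inspect force-systemd-flag-203169 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
    docker inspect force-systemd-flag-203169 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'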
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-203169 -n force-systemd-flag-203169
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-203169 -n force-systemd-flag-203169: exit status 6 (328.161172ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 19:47:50.286931  178190 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-203169" does not appear in /home/jenkins/minikube-integration/21701-2540/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
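Exit status 6 is the kubeconfig mismatch the stderr spells out: the aborted start never registered "force-systemd-flag-203169" in /home/jenkins/minikube-integration/21701-2540/kubeconfig, so `status` can see the host but not an apiserver endpoint. On a cluster that did come up, the warning's own suggestion is the repair; a brief sketch (the kubectl lines are only the standard way to confirm the context afterwards, not commands from this run):
    out/minikube-linux-arm64 -p force-systemd-flag-203169 update-context
    kubectl config current-context
    kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'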
helpers_test.go:252: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-203169 logs -n 25
helpers_test.go:260: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-053944 sudo systemctl cat kubelet --no-pager                                                     │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl status docker --all --full --no-pager                                      │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl cat docker --no-pager                                                      │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /etc/docker/daemon.json                                                          │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo docker system info                                                                   │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cri-dockerd --version                                                                │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl cat containerd --no-pager                                                  │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /etc/containerd/config.toml                                                      │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo containerd config dump                                                               │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl status crio --all --full --no-pager                                        │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl cat crio --no-pager                                                        │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo crio config                                                                          │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ delete  │ -p cilium-053944                                                                                           │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │ 06 Oct 25 19:40 UTC │
	│ start   │ -p force-systemd-env-760371 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-760371  │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ force-systemd-flag-203169 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-203169 │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:47 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:40:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:40:51.797040  174295 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:40:51.797217  174295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:40:51.797230  174295 out.go:374] Setting ErrFile to fd 2...
	I1006 19:40:51.797236  174295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:40:51.797518  174295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:40:51.797980  174295 out.go:368] Setting JSON to false
	I1006 19:40:51.798903  174295 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4987,"bootTime":1759774665,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:40:51.798971  174295 start.go:140] virtualization:  
	I1006 19:40:51.802558  174295 out.go:179] * [force-systemd-env-760371] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:40:51.805738  174295 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:40:51.805811  174295 notify.go:220] Checking for updates...
	I1006 19:40:51.811990  174295 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:40:51.815011  174295 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:40:51.817888  174295 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:40:51.820795  174295 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:40:51.823663  174295 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1006 19:40:51.827058  174295 config.go:182] Loaded profile config "force-systemd-flag-203169": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:40:51.827168  174295 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:40:51.860116  174295 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:40:51.860236  174295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:40:51.922437  174295 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:40:51.913312997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:40:51.922562  174295 docker.go:318] overlay module found
	I1006 19:40:51.925709  174295 out.go:179] * Using the docker driver based on user configuration
	I1006 19:40:51.928544  174295 start.go:304] selected driver: docker
	I1006 19:40:51.928564  174295 start.go:924] validating driver "docker" against <nil>
	I1006 19:40:51.928577  174295 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:40:51.929331  174295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:40:51.983563  174295 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:40:51.973907642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:40:51.983888  174295 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 19:40:51.984141  174295 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 19:40:51.987270  174295 out.go:179] * Using Docker driver with root privileges
	I1006 19:40:51.990162  174295 cni.go:84] Creating CNI manager for ""
	I1006 19:40:51.990244  174295 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:40:51.990259  174295 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 19:40:51.990349  174295 start.go:348] cluster config:
	{Name:force-systemd-env-760371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-760371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:40:51.993569  174295 out.go:179] * Starting "force-systemd-env-760371" primary control-plane node in "force-systemd-env-760371" cluster
	I1006 19:40:51.996529  174295 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:40:51.999434  174295 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:40:52.002305  174295 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:40:52.002353  174295 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:40:52.002376  174295 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:40:52.002394  174295 cache.go:58] Caching tarball of preloaded images
	I1006 19:40:52.002486  174295 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:40:52.002495  174295 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:40:52.002614  174295 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/config.json ...
	I1006 19:40:52.002645  174295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/config.json: {Name:mka31f5862185485bf03db99e8df838b3a1c83e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:40:52.031927  174295 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:40:52.031953  174295 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:40:52.031982  174295 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:40:52.032007  174295 start.go:360] acquireMachinesLock for force-systemd-env-760371: {Name:mk3287ebe7916dc03109d9ffe39570f41d010e75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:40:52.032126  174295 start.go:364] duration metric: took 96.714µs to acquireMachinesLock for "force-systemd-env-760371"
	I1006 19:40:52.032159  174295 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-760371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-760371 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:40:52.032230  174295 start.go:125] createHost starting for "" (driver="docker")
	I1006 19:40:52.035692  174295 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 19:40:52.035965  174295 start.go:159] libmachine.API.Create for "force-systemd-env-760371" (driver="docker")
	I1006 19:40:52.036013  174295 client.go:168] LocalClient.Create starting
	I1006 19:40:52.036093  174295 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem
	I1006 19:40:52.036138  174295 main.go:141] libmachine: Decoding PEM data...
	I1006 19:40:52.036164  174295 main.go:141] libmachine: Parsing certificate...
	I1006 19:40:52.036228  174295 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem
	I1006 19:40:52.036251  174295 main.go:141] libmachine: Decoding PEM data...
	I1006 19:40:52.036267  174295 main.go:141] libmachine: Parsing certificate...
	I1006 19:40:52.036675  174295 cli_runner.go:164] Run: docker network inspect force-systemd-env-760371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 19:40:52.054537  174295 cli_runner.go:211] docker network inspect force-systemd-env-760371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 19:40:52.054631  174295 network_create.go:284] running [docker network inspect force-systemd-env-760371] to gather additional debugging logs...
	I1006 19:40:52.054649  174295 cli_runner.go:164] Run: docker network inspect force-systemd-env-760371
	W1006 19:40:52.071663  174295 cli_runner.go:211] docker network inspect force-systemd-env-760371 returned with exit code 1
	I1006 19:40:52.071692  174295 network_create.go:287] error running [docker network inspect force-systemd-env-760371]: docker network inspect force-systemd-env-760371: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-760371 not found
	I1006 19:40:52.071715  174295 network_create.go:289] output of [docker network inspect force-systemd-env-760371]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-760371 not found
	
	** /stderr **
	I1006 19:40:52.071861  174295 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:40:52.089269  174295 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7058eae896da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:57:c0:bd:de:1b} reservation:<nil>}
	I1006 19:40:52.089609  174295 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e321ce8cf6dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:e6:76:87:fe:bb} reservation:<nil>}
	I1006 19:40:52.089919  174295 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-17bdbd7bd7be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:93:f3:19:55:10} reservation:<nil>}
	I1006 19:40:52.090359  174295 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a16900}
	I1006 19:40:52.090382  174295 network_create.go:124] attempt to create docker network force-systemd-env-760371 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1006 19:40:52.090445  174295 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-760371 force-systemd-env-760371
	I1006 19:40:52.156058  174295 network_create.go:108] docker network force-systemd-env-760371 192.168.76.0/24 created
	I1006 19:40:52.156089  174295 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-760371" container
	I1006 19:40:52.156160  174295 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 19:40:52.172990  174295 cli_runner.go:164] Run: docker volume create force-systemd-env-760371 --label name.minikube.sigs.k8s.io=force-systemd-env-760371 --label created_by.minikube.sigs.k8s.io=true
	I1006 19:40:52.191801  174295 oci.go:103] Successfully created a docker volume force-systemd-env-760371
	I1006 19:40:52.191893  174295 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-760371-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-760371 --entrypoint /usr/bin/test -v force-systemd-env-760371:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 19:40:52.716336  174295 oci.go:107] Successfully prepared a docker volume force-systemd-env-760371
	I1006 19:40:52.716385  174295 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:40:52.716405  174295 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 19:40:52.716480  174295 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-760371:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 19:40:57.156119  174295 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-760371:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.439598228s)
	I1006 19:40:57.156150  174295 kic.go:203] duration metric: took 4.439741319s to extract preloaded images to volume ...
	W1006 19:40:57.156301  174295 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 19:40:57.156416  174295 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 19:40:57.206891  174295 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-760371 --name force-systemd-env-760371 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-760371 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-760371 --network force-systemd-env-760371 --ip 192.168.76.2 --volume force-systemd-env-760371:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 19:40:57.520934  174295 cli_runner.go:164] Run: docker container inspect force-systemd-env-760371 --format={{.State.Running}}
	I1006 19:40:57.552151  174295 cli_runner.go:164] Run: docker container inspect force-systemd-env-760371 --format={{.State.Status}}
	I1006 19:40:57.574692  174295 cli_runner.go:164] Run: docker exec force-systemd-env-760371 stat /var/lib/dpkg/alternatives/iptables
	I1006 19:40:57.622089  174295 oci.go:144] the created container "force-systemd-env-760371" has a running status.
	I1006 19:40:57.622131  174295 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa...
	I1006 19:40:59.583107  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 19:40:59.583203  174295 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 19:40:59.604321  174295 cli_runner.go:164] Run: docker container inspect force-systemd-env-760371 --format={{.State.Status}}
	I1006 19:40:59.625126  174295 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 19:40:59.625145  174295 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-760371 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 19:40:59.673877  174295 cli_runner.go:164] Run: docker container inspect force-systemd-env-760371 --format={{.State.Status}}
	I1006 19:40:59.692877  174295 machine.go:93] provisionDockerMachine start ...
	I1006 19:40:59.692967  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:40:59.711656  174295 main.go:141] libmachine: Using SSH client type: native
	I1006 19:40:59.712023  174295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1006 19:40:59.712040  174295 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:40:59.847484  174295 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-760371
	
	I1006 19:40:59.847510  174295 ubuntu.go:182] provisioning hostname "force-systemd-env-760371"
	I1006 19:40:59.847601  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:40:59.867121  174295 main.go:141] libmachine: Using SSH client type: native
	I1006 19:40:59.867449  174295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1006 19:40:59.867465  174295 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-760371 && echo "force-systemd-env-760371" | sudo tee /etc/hostname
	I1006 19:41:00.038864  174295 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-760371
	
	I1006 19:41:00.038995  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:41:00.132980  174295 main.go:141] libmachine: Using SSH client type: native
	I1006 19:41:00.133301  174295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1006 19:41:00.133319  174295 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-760371' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-760371/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-760371' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:41:00.464782  174295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:41:00.464813  174295 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:41:00.464836  174295 ubuntu.go:190] setting up certificates
	I1006 19:41:00.464846  174295 provision.go:84] configureAuth start
	I1006 19:41:00.464916  174295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-760371
	I1006 19:41:00.485780  174295 provision.go:143] copyHostCerts
	I1006 19:41:00.485844  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:41:00.485887  174295 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:41:00.485905  174295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:41:00.485995  174295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:41:00.486107  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:41:00.486132  174295 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:41:00.486138  174295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:41:00.486168  174295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:41:00.486227  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:41:00.486247  174295 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:41:00.486252  174295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:41:00.486290  174295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:41:00.486368  174295 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-760371 san=[127.0.0.1 192.168.76.2 force-systemd-env-760371 localhost minikube]
	I1006 19:41:01.528666  174295 provision.go:177] copyRemoteCerts
	I1006 19:41:01.528734  174295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:41:01.528772  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:41:01.546025  174295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa Username:docker}
	I1006 19:41:01.643653  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 19:41:01.643740  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:41:01.663314  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 19:41:01.663444  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1006 19:41:01.682495  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 19:41:01.682576  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 19:41:01.700705  174295 provision.go:87] duration metric: took 1.235845158s to configureAuth
	I1006 19:41:01.700743  174295 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:41:01.700919  174295 config.go:182] Loaded profile config "force-systemd-env-760371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:41:01.701015  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:41:01.718704  174295 main.go:141] libmachine: Using SSH client type: native
	I1006 19:41:01.719024  174295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1006 19:41:01.719045  174295 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:41:01.965441  174295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:41:01.965513  174295 machine.go:96] duration metric: took 2.272604241s to provisionDockerMachine
	I1006 19:41:01.965538  174295 client.go:171] duration metric: took 9.929513108s to LocalClient.Create
	I1006 19:41:01.965585  174295 start.go:167] duration metric: took 9.929621252s to libmachine.API.Create "force-systemd-env-760371"
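A few lines above, CRIO_MINIKUBE_OPTIONS (here the insecure service CIDR registry) was written to /etc/sysconfig/crio.minikube and CRI-O restarted. A quick check that the drop-in landed and the restart picked it up could look like this; whether the kicbase crio unit sources that file via an EnvironmentFile is an assumption, it is not shown in this log:

    cat /etc/sysconfig/crio.minikube            # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -i environment    # see which environment files the unit actually loads
    systemctl is-active crio && journalctl -u crio -n 20 --no-pager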
	I1006 19:41:01.965609  174295 start.go:293] postStartSetup for "force-systemd-env-760371" (driver="docker")
	I1006 19:41:01.965631  174295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:41:01.965717  174295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:41:01.965783  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:41:01.983293  174295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa Username:docker}
	I1006 19:41:02.084270  174295 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:41:02.087759  174295 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:41:02.087834  174295 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:41:02.087851  174295 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:41:02.087905  174295 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:41:02.088001  174295 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:41:02.088013  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> /etc/ssl/certs/43502.pem
	I1006 19:41:02.088112  174295 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:41:02.095840  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:41:02.114882  174295 start.go:296] duration metric: took 149.246191ms for postStartSetup
	I1006 19:41:02.115392  174295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-760371
	I1006 19:41:02.132467  174295 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/config.json ...
	I1006 19:41:02.132769  174295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:41:02.132819  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:41:02.149974  174295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa Username:docker}
	I1006 19:41:02.244932  174295 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:41:02.249599  174295 start.go:128] duration metric: took 10.217354455s to createHost
	I1006 19:41:02.249623  174295 start.go:83] releasing machines lock for "force-systemd-env-760371", held for 10.2174823s
	I1006 19:41:02.249714  174295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-760371
	I1006 19:41:02.272838  174295 ssh_runner.go:195] Run: cat /version.json
	I1006 19:41:02.272900  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:41:02.273189  174295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:41:02.273253  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:41:02.294414  174295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa Username:docker}
	I1006 19:41:02.305374  174295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa Username:docker}
	I1006 19:41:02.391513  174295 ssh_runner.go:195] Run: systemctl --version
	I1006 19:41:02.481923  174295 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:41:02.520709  174295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:41:02.525156  174295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:41:02.525269  174295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:41:02.553233  174295 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 19:41:02.553257  174295 start.go:495] detecting cgroup driver to use...
	I1006 19:41:02.553273  174295 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1006 19:41:02.553329  174295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:41:02.570614  174295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:41:02.583481  174295 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:41:02.583543  174295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:41:02.601088  174295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:41:02.620821  174295 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:41:02.737845  174295 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:41:02.867493  174295 docker.go:234] disabling docker service ...
	I1006 19:41:02.867572  174295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:41:02.888701  174295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:41:02.902478  174295 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:41:03.028067  174295 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:41:03.142206  174295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
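Because only one runtime should answer on the CRI socket, the steps above stop and mask cri-docker and docker before CRI-O is configured. A condensed sketch of the same sequence (unit names and the -f flag copied from the logged commands):

    # stop and mask the Docker-based CRI endpoints so they cannot be socket-activated again
    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    sudo systemctl is-active --quiet docker || echo "docker is no longer active"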
	I1006 19:41:03.154863  174295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:41:03.170299  174295 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:41:03.170385  174295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:41:03.179993  174295 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 19:41:03.180106  174295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:41:03.190105  174295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:41:03.199593  174295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:41:03.209050  174295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:41:03.217073  174295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:41:03.226848  174295 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:41:03.240344  174295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:41:03.249485  174295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:41:03.257053  174295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:41:03.264276  174295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:41:03.370002  174295 ssh_runner.go:195] Run: sudo systemctl restart crio
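The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, conmon_cgroup = "pod", and a default sysctl that opens unprivileged ports, followed by a daemon-reload and CRI-O restart. Collected into one script, with paths and expressions copied from the log and everything else an assumption:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                              # drop any stale setting
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"     # avoid duplicating the sysctl
    sudo grep -q "^ *default_sysctls" "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio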
	I1006 19:41:03.509936  174295 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:41:03.510053  174295 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:41:03.513874  174295 start.go:563] Will wait 60s for crictl version
	I1006 19:41:03.513987  174295 ssh_runner.go:195] Run: which crictl
	I1006 19:41:03.517503  174295 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:41:03.542274  174295 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:41:03.542444  174295 ssh_runner.go:195] Run: crio --version
	I1006 19:41:03.573323  174295 ssh_runner.go:195] Run: crio --version
	I1006 19:41:03.606570  174295 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:41:03.609473  174295 cli_runner.go:164] Run: docker network inspect force-systemd-env-760371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:41:03.625527  174295 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1006 19:41:03.629485  174295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:41:03.640036  174295 kubeadm.go:883] updating cluster {Name:force-systemd-env-760371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-760371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:41:03.640151  174295 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:41:03.640213  174295 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:41:03.674093  174295 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:41:03.674116  174295 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:41:03.674169  174295 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:41:03.698815  174295 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:41:03.698837  174295 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:41:03.698845  174295 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1006 19:41:03.698942  174295 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-760371 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-760371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
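The [Unit]/[Service] fragment above is the node-specific kubelet drop-in: the empty ExecStart= clears the packaged command line before the minikube-specific one is set, and the flags pin the node name, node IP, kubeconfig paths and cgroup handling. Written out by hand it would look roughly like this; the file path matches the scp target a few lines further down, the heredoc itself is illustrative:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-760371 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

    [Install]
    EOF
    sudo systemctl daemon-reload && sudo systemctl start kubelet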
	I1006 19:41:03.699032  174295 ssh_runner.go:195] Run: crio config
	I1006 19:41:03.766351  174295 cni.go:84] Creating CNI manager for ""
	I1006 19:41:03.766375  174295 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:41:03.766390  174295 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:41:03.766416  174295 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-760371 NodeName:force-systemd-env-760371 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:41:03.766621  174295 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-760371"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:41:03.766713  174295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:41:03.774492  174295 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:41:03.774579  174295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:41:03.782430  174295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1006 19:41:03.795739  174295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:41:03.808673  174295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
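The generated kubeadm.yaml (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, dumped in full above) is staged here as kubeadm.yaml.new and copied to kubeadm.yaml before init. When the control plane later fails to come up, as it does further down, a dry run of the same config is a cheap first check; the flags are standard kubeadm, the paths mirror the logged invocation:

    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run \
      --ignore-preflight-errors=SystemVerification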
	I1006 19:41:03.822858  174295 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:41:03.826601  174295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:41:03.836682  174295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:41:03.960387  174295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:41:03.976893  174295 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371 for IP: 192.168.76.2
	I1006 19:41:03.976912  174295 certs.go:195] generating shared ca certs ...
	I1006 19:41:03.976928  174295 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:41:03.977062  174295 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:41:03.977109  174295 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:41:03.977121  174295 certs.go:257] generating profile certs ...
	I1006 19:41:03.977183  174295 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/client.key
	I1006 19:41:03.977198  174295 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/client.crt with IP's: []
	I1006 19:41:05.086387  174295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/client.crt ...
	I1006 19:41:05.086423  174295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/client.crt: {Name:mk4601cd87b01add89161db4ec97c2390e11c2fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:41:05.086623  174295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/client.key ...
	I1006 19:41:05.086640  174295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/client.key: {Name:mkaac636eddfa1ad2ecfe724a4295502dd613c01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:41:05.086734  174295 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.key.1944f0cc
	I1006 19:41:05.086751  174295 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.crt.1944f0cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1006 19:41:05.650961  174295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.crt.1944f0cc ...
	I1006 19:41:05.650992  174295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.crt.1944f0cc: {Name:mk943e655c584b929141ef2fe7f12923c8e0fa73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:41:05.651174  174295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.key.1944f0cc ...
	I1006 19:41:05.651188  174295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.key.1944f0cc: {Name:mk860322bda7fa95f3ba379abadc4bd168011eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:41:05.651276  174295 certs.go:382] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.crt.1944f0cc -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.crt
	I1006 19:41:05.651355  174295 certs.go:386] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.key.1944f0cc -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.key
	I1006 19:41:05.651415  174295 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.key
	I1006 19:41:05.651432  174295 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.crt with IP's: []
	I1006 19:41:06.233074  174295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.crt ...
	I1006 19:41:06.233105  174295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.crt: {Name:mkc7fd3c3aa33713543f14b4493f54b98b3b84f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:41:06.233276  174295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.key ...
	I1006 19:41:06.233292  174295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.key: {Name:mk7b7f5c8dcf06fddd86deab74c50754274e64ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:41:06.233370  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 19:41:06.233391  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 19:41:06.233404  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 19:41:06.233420  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 19:41:06.233433  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 19:41:06.233451  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 19:41:06.233467  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 19:41:06.233482  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 19:41:06.233549  174295 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:41:06.233590  174295 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:41:06.233603  174295 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:41:06.233632  174295 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:41:06.233655  174295 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:41:06.233684  174295 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:41:06.233731  174295 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:41:06.233762  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> /usr/share/ca-certificates/43502.pem
	I1006 19:41:06.233775  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:41:06.233785  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem -> /usr/share/ca-certificates/4350.pem
	I1006 19:41:06.234402  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:41:06.254144  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:41:06.273819  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:41:06.297082  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:41:06.314647  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1006 19:41:06.331886  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 19:41:06.349004  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:41:06.367448  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 19:41:06.384819  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:41:06.402292  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:41:06.421555  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:41:06.438947  174295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:41:06.452527  174295 ssh_runner.go:195] Run: openssl version
	I1006 19:41:06.458967  174295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:41:06.467626  174295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:41:06.471331  174295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:41:06.471403  174295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:41:06.512784  174295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:41:06.521283  174295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:41:06.529716  174295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:41:06.533562  174295 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:41:06.533625  174295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:41:06.575558  174295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:41:06.584945  174295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:41:06.593816  174295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:41:06.598167  174295 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:41:06.598335  174295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:41:06.645983  174295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
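The lines above install each extra CA (the per-user 4350/43502 PEMs plus minikubeCA) under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject-hash name, which is how OpenSSL-based clients find it without running update-ca-certificates. The generic recipe for any PEM looks like this; CERT is a placeholder path, and for minikubeCA the computed hash in this run was b5213941:

    CERT=/usr/share/ca-certificates/minikubeCA.pem          # placeholder certificate
    HASH=$(openssl x509 -hash -noout -in "$CERT")            # e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/$(basename "$CERT")"
    sudo ln -fs "/etc/ssl/certs/$(basename "$CERT")" "/etc/ssl/certs/${HASH}.0"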
	I1006 19:41:06.655833  174295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:41:06.659558  174295 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 19:41:06.659610  174295 kubeadm.go:400] StartCluster: {Name:force-systemd-env-760371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-760371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:41:06.659757  174295 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:41:06.659823  174295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:41:06.686617  174295 cri.go:89] found id: ""
	I1006 19:41:06.686702  174295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:41:06.694762  174295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 19:41:06.702594  174295 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:41:06.702685  174295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:41:06.710811  174295 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:41:06.710832  174295 kubeadm.go:157] found existing configuration files:
	
	I1006 19:41:06.710888  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 19:41:06.718787  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:41:06.718877  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:41:06.726392  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 19:41:06.734171  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:41:06.734307  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:41:06.741860  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 19:41:06.749926  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:41:06.750011  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:41:06.757695  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 19:41:06.765687  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:41:06.765755  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
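The block above is the stale-config sweep: for each kubeconfig kubeadm might have left behind, keep it only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so the upcoming init can rewrite it. The same check as a loop, with the file list and endpoint copied from the log:

    ENDPOINT=https://control-plane.minikube.internal:8443
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # a missing file or a file pointing at another endpoint gets removed
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done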
	I1006 19:41:06.773349  174295 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:41:06.814039  174295 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:41:06.814112  174295 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:41:06.840097  174295 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:41:06.840176  174295 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:41:06.840219  174295 kubeadm.go:318] OS: Linux
	I1006 19:41:06.840271  174295 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:41:06.840327  174295 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:41:06.840380  174295 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:41:06.840435  174295 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:41:06.840489  174295 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:41:06.840544  174295 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:41:06.840596  174295 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:41:06.840650  174295 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:41:06.840703  174295 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:41:06.911603  174295 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:41:06.911748  174295 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:41:06.911846  174295 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:41:06.924221  174295 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 19:41:06.927509  174295 out.go:252]   - Generating certificates and keys ...
	I1006 19:41:06.927651  174295 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:41:06.927776  174295 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:41:07.110656  174295 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 19:41:07.617959  174295 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 19:41:07.935778  174295 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 19:41:08.084012  174295 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 19:41:08.503643  174295 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 19:41:08.503818  174295 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-760371 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1006 19:41:08.549707  174295 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 19:41:08.550020  174295 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-760371 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1006 19:41:08.741675  174295 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 19:41:09.663097  174295 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 19:41:10.822315  174295 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 19:41:10.822845  174295 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 19:41:11.186368  174295 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:41:11.993890  174295 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:41:12.587160  174295 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:41:13.048610  174295 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:41:13.393251  174295 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:41:13.394054  174295 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:41:13.397031  174295 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 19:41:13.400489  174295 out.go:252]   - Booting up control plane ...
	I1006 19:41:13.400611  174295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:41:13.400702  174295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:41:13.402389  174295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:41:13.422255  174295 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:41:13.422371  174295 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:41:13.429765  174295 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:41:13.430143  174295 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:41:13.430444  174295 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:41:13.557208  174295 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:41:13.557336  174295 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:41:14.560096  174295 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001807913s
	I1006 19:41:14.562623  174295 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:41:14.562725  174295 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1006 19:41:14.562853  174295 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:41:14.562942  174295 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 19:43:45.612712  168504 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000956017s
	I1006 19:43:45.612845  168504 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001041146s
	I1006 19:43:45.613693  168504 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000990122s
	I1006 19:43:45.613822  168504 kubeadm.go:318] 
	I1006 19:43:45.613929  168504 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 19:43:45.614028  168504 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 19:43:45.614342  168504 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 19:43:45.614457  168504 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 19:43:45.614610  168504 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 19:43:45.614761  168504 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 19:43:45.614790  168504 kubeadm.go:318] 
	I1006 19:43:45.620169  168504 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 19:43:45.620410  168504 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:43:45.620521  168504 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 19:43:45.621104  168504 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 19:43:45.621176  168504 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1006 19:43:45.621308  168504 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-203169 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-203169 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 3.001969282s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000956017s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001041146s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000990122s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1006 19:43:45.621386  168504 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 19:43:46.127928  168504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:43:46.141141  168504 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:43:46.141209  168504 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:43:46.149317  168504 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:43:46.149339  168504 kubeadm.go:157] found existing configuration files:
	
	I1006 19:43:46.149389  168504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 19:43:46.157329  168504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:43:46.157393  168504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:43:46.165142  168504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 19:43:46.173158  168504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:43:46.173234  168504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:43:46.181037  168504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 19:43:46.189206  168504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:43:46.189277  168504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:43:46.197104  168504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 19:43:46.205450  168504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:43:46.205519  168504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
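For reference, the stale-config cleanup that minikube walks through above (between the "kubeadm reset" call and the retried "kubeadm init" below) can be summarized as the following shell sketch; the file paths and the control-plane endpoint are taken verbatim from the log lines, but the loop form is only illustrative, not minikube's actual implementation:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected control-plane endpoint
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done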
	I1006 19:43:46.213647  168504 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:43:46.254264  168504 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:43:46.254471  168504 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:43:46.287961  168504 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:43:46.288033  168504 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:43:46.288071  168504 kubeadm.go:318] OS: Linux
	I1006 19:43:46.288119  168504 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:43:46.288170  168504 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:43:46.288220  168504 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:43:46.288274  168504 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:43:46.288324  168504 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:43:46.288375  168504 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:43:46.288422  168504 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:43:46.288486  168504 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:43:46.288535  168504 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:43:46.358617  168504 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:43:46.358726  168504 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:43:46.358817  168504 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:43:46.370288  168504 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 19:43:46.377053  168504 out.go:252]   - Generating certificates and keys ...
	I1006 19:43:46.377146  168504 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:43:46.377216  168504 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:43:46.377299  168504 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 19:43:46.377397  168504 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 19:43:46.377556  168504 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 19:43:46.377624  168504 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 19:43:46.377692  168504 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 19:43:46.377779  168504 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 19:43:46.377860  168504 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 19:43:46.378138  168504 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 19:43:46.378381  168504 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 19:43:46.378445  168504 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 19:43:46.573987  168504 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:43:46.702887  168504 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:43:47.063322  168504 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:43:47.456922  168504 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:43:47.901735  168504 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:43:47.902964  168504 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:43:47.907003  168504 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 19:43:47.910295  168504 out.go:252]   - Booting up control plane ...
	I1006 19:43:47.910396  168504 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:43:47.910473  168504 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:43:47.910540  168504 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:43:47.925029  168504 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:43:47.925767  168504 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:43:47.933355  168504 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:43:47.933664  168504 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:43:47.933722  168504 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:43:48.081873  168504 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:43:48.081997  168504 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:43:49.082616  168504 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000838819s
	I1006 19:43:49.088801  168504 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:43:49.088905  168504 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1006 19:43:49.089021  168504 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:43:49.089127  168504 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 19:45:14.563760  174295 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000821405s
	I1006 19:45:14.563949  174295 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000589905s
	I1006 19:45:14.565243  174295 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.002667498s
	I1006 19:45:14.565259  174295 kubeadm.go:318] 
	I1006 19:45:14.565358  174295 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 19:45:14.565459  174295 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 19:45:14.565567  174295 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 19:45:14.565856  174295 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 19:45:14.565949  174295 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 19:45:14.566034  174295 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 19:45:14.566043  174295 kubeadm.go:318] 
	I1006 19:45:14.570253  174295 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 19:45:14.570586  174295 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:45:14.570735  174295 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 19:45:14.571379  174295 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1006 19:45:14.571532  174295 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1006 19:45:14.571600  174295 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-760371 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-760371 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001807913s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000821405s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000589905s
	[control-plane-check] kube-scheduler is not healthy after 4m0.002667498s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1006 19:45:14.571686  174295 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 19:45:15.135509  174295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:45:15.149786  174295 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:45:15.149854  174295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:45:15.157922  174295 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:45:15.157949  174295 kubeadm.go:157] found existing configuration files:
	
	I1006 19:45:15.158017  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 19:45:15.166151  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:45:15.166223  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:45:15.174347  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 19:45:15.182930  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:45:15.182996  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:45:15.190988  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 19:45:15.199110  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:45:15.199175  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:45:15.206901  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 19:45:15.214652  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:45:15.214740  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 19:45:15.222694  174295 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:45:15.266638  174295 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:45:15.266887  174295 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:45:15.290228  174295 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:45:15.290379  174295 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:45:15.290455  174295 kubeadm.go:318] OS: Linux
	I1006 19:45:15.290543  174295 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:45:15.290636  174295 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:45:15.290720  174295 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:45:15.290802  174295 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:45:15.290885  174295 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:45:15.290974  174295 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:45:15.291051  174295 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:45:15.291132  174295 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:45:15.291213  174295 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:45:15.362442  174295 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:45:15.362589  174295 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:45:15.362719  174295 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:45:15.369346  174295 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 19:45:15.376435  174295 out.go:252]   - Generating certificates and keys ...
	I1006 19:45:15.376561  174295 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:45:15.376650  174295 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:45:15.376745  174295 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 19:45:15.376822  174295 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 19:45:15.376921  174295 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 19:45:15.377019  174295 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 19:45:15.377134  174295 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 19:45:15.377237  174295 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 19:45:15.377357  174295 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 19:45:15.377465  174295 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 19:45:15.377531  174295 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 19:45:15.377615  174295 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 19:45:15.845806  174295 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:45:16.232193  174295 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:45:17.984854  174295 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:45:18.461357  174295 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:45:18.946485  174295 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:45:18.947350  174295 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:45:18.950114  174295 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 19:45:18.954413  174295 out.go:252]   - Booting up control plane ...
	I1006 19:45:18.954514  174295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:45:18.954592  174295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:45:18.954661  174295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:45:18.971046  174295 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:45:18.971186  174295 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:45:18.979065  174295 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:45:18.979554  174295 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:45:18.979643  174295 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:45:19.133097  174295 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:45:19.133219  174295 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:45:20.633601  174295 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500707292s
	I1006 19:45:20.636956  174295 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:45:20.637048  174295 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1006 19:45:20.637371  174295 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:45:20.637460  174295 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 19:47:49.087468  168504 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00036164s
	I1006 19:47:49.087637  168504 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000196379s
	I1006 19:47:49.088608  168504 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000819536s
	I1006 19:47:49.088634  168504 kubeadm.go:318] 
	I1006 19:47:49.088803  168504 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 19:47:49.088951  168504 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 19:47:49.089112  168504 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 19:47:49.089286  168504 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 19:47:49.089591  168504 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 19:47:49.089744  168504 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 19:47:49.089750  168504 kubeadm.go:318] 
	I1006 19:47:49.094209  168504 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 19:47:49.094556  168504 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:47:49.094712  168504 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 19:47:49.095472  168504 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1006 19:47:49.095572  168504 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 19:47:49.095646  168504 kubeadm.go:402] duration metric: took 8m13.906100786s to StartCluster
	I1006 19:47:49.095722  168504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:47:49.095790  168504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:47:49.122017  168504 cri.go:89] found id: ""
	I1006 19:47:49.122048  168504 logs.go:282] 0 containers: []
	W1006 19:47:49.122057  168504 logs.go:284] No container was found matching "kube-apiserver"
	I1006 19:47:49.122064  168504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:47:49.122122  168504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:47:49.149475  168504 cri.go:89] found id: ""
	I1006 19:47:49.149498  168504 logs.go:282] 0 containers: []
	W1006 19:47:49.149507  168504 logs.go:284] No container was found matching "etcd"
	I1006 19:47:49.149513  168504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:47:49.149579  168504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:47:49.175030  168504 cri.go:89] found id: ""
	I1006 19:47:49.175052  168504 logs.go:282] 0 containers: []
	W1006 19:47:49.175061  168504 logs.go:284] No container was found matching "coredns"
	I1006 19:47:49.175068  168504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:47:49.175127  168504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:47:49.200883  168504 cri.go:89] found id: ""
	I1006 19:47:49.200904  168504 logs.go:282] 0 containers: []
	W1006 19:47:49.200913  168504 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:47:49.200919  168504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:47:49.200980  168504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:47:49.226947  168504 cri.go:89] found id: ""
	I1006 19:47:49.226971  168504 logs.go:282] 0 containers: []
	W1006 19:47:49.226980  168504 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:47:49.227001  168504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:47:49.227063  168504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:47:49.253216  168504 cri.go:89] found id: ""
	I1006 19:47:49.253241  168504 logs.go:282] 0 containers: []
	W1006 19:47:49.253249  168504 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 19:47:49.253256  168504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:47:49.253315  168504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:47:49.284772  168504 cri.go:89] found id: ""
	I1006 19:47:49.284794  168504 logs.go:282] 0 containers: []
	W1006 19:47:49.284803  168504 logs.go:284] No container was found matching "kindnet"
	I1006 19:47:49.284812  168504 logs.go:123] Gathering logs for kubelet ...
	I1006 19:47:49.284823  168504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:47:49.375258  168504 logs.go:123] Gathering logs for dmesg ...
	I1006 19:47:49.375291  168504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:47:49.390955  168504 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:47:49.390984  168504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:47:49.463091  168504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 19:47:49.454380    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.455411    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.456744    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.457362    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.459020    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 19:47:49.454380    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.455411    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.456744    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.457362    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:49.459020    2372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:47:49.463112  168504 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:47:49.463126  168504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:47:49.538724  168504 logs.go:123] Gathering logs for container status ...
	I1006 19:47:49.538764  168504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1006 19:47:49.573145  168504 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000838819s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.00036164s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000196379s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000819536s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 19:47:49.573217  168504 out.go:285] * 
	W1006 19:47:49.573285  168504 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000838819s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.00036164s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000196379s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000819536s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 19:47:49.573312  168504 out.go:285] * 
	W1006 19:47:49.576106  168504 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 19:47:49.583442  168504 out.go:203] 
	W1006 19:47:49.586296  168504 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000838819s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.00036164s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000196379s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000819536s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 19:47:49.586322  168504 out.go:285] * 
	I1006 19:47:49.591381  168504 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 19:47:38 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:38.866409735Z" level=info msg="createCtr: removing container 628c5ae4a32f58e392184036dccc63944168a19926ff6b83a428bcb2e1d3ea94" id=35d54d27-6bb8-4bd2-b84f-b0e6070e93dc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:47:38 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:38.866447373Z" level=info msg="createCtr: deleting container 628c5ae4a32f58e392184036dccc63944168a19926ff6b83a428bcb2e1d3ea94 from storage" id=35d54d27-6bb8-4bd2-b84f-b0e6070e93dc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:47:38 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:38.869092178Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-flag-203169_kube-system_04002c091966ad10d5f9010a2b4f5ddd_0" id=35d54d27-6bb8-4bd2-b84f-b0e6070e93dc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:47:43 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:43.84625709Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=83f0cb5f-8132-4f38-8db3-da7977a31f58 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:47:43 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:43.84708701Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=e81394ee-e38b-413f-b22f-e4b8d99e1be5 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:47:43 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:43.84795341Z" level=info msg="Creating container: kube-system/kube-controller-manager-force-systemd-flag-203169/kube-controller-manager" id=3e437f22-67c9-4ec5-b918-5d51a955d694 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:47:43 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:43.848189827Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:47:43 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:43.852687262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:47:43 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:43.853173587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:47:43 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:43.86416229Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3e437f22-67c9-4ec5-b918-5d51a955d694 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:47:43 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:43.86532883Z" level=info msg="createCtr: deleting container ID a758da12125519a53a1e58460627dd96fbdbf3ca7f27aea70e21a1c862088eaa from idIndex" id=3e437f22-67c9-4ec5-b918-5d51a955d694 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:47:43 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:43.865369552Z" level=info msg="createCtr: removing container a758da12125519a53a1e58460627dd96fbdbf3ca7f27aea70e21a1c862088eaa" id=3e437f22-67c9-4ec5-b918-5d51a955d694 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:47:43 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:43.865404909Z" level=info msg="createCtr: deleting container a758da12125519a53a1e58460627dd96fbdbf3ca7f27aea70e21a1c862088eaa from storage" id=3e437f22-67c9-4ec5-b918-5d51a955d694 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:47:43 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:43.868419992Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-flag-203169_kube-system_388a93c92b306402e9711320717710e3_0" id=3e437f22-67c9-4ec5-b918-5d51a955d694 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:47:47 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:47.846534997Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=e6040e4d-6110-4e49-88ca-dcb9fc916a17 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:47:47 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:47.847380851Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=445784ba-f9e3-41f2-a89a-ee65fcd91300 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:47:47 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:47.848341365Z" level=info msg="Creating container: kube-system/kube-scheduler-force-systemd-flag-203169/kube-scheduler" id=5176914c-2c34-4271-bf14-f7e5b3078e47 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:47:47 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:47.8485724Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:47:47 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:47.853167256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:47:47 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:47.853662672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:47:47 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:47.863866536Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=5176914c-2c34-4271-bf14-f7e5b3078e47 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:47:47 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:47.865133352Z" level=info msg="createCtr: deleting container ID ffee69113eea6c25adb3bacc5ffef95b4ec6c0838a28a9bfba553bb9f0df7760 from idIndex" id=5176914c-2c34-4271-bf14-f7e5b3078e47 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:47:47 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:47.865181024Z" level=info msg="createCtr: removing container ffee69113eea6c25adb3bacc5ffef95b4ec6c0838a28a9bfba553bb9f0df7760" id=5176914c-2c34-4271-bf14-f7e5b3078e47 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:47:47 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:47.865218219Z" level=info msg="createCtr: deleting container ffee69113eea6c25adb3bacc5ffef95b4ec6c0838a28a9bfba553bb9f0df7760 from storage" id=5176914c-2c34-4271-bf14-f7e5b3078e47 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:47:47 force-systemd-flag-203169 crio[843]: time="2025-10-06T19:47:47.868118486Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-force-systemd-flag-203169_kube-system_5be40f1bedea36cfaaee58bffa301957_0" id=5176914c-2c34-4271-bf14-f7e5b3078e47 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 19:47:50.977375    2504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:50.978049    2504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:50.979664    2504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:50.980421    2504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:47:50.981999    2504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +3.608985] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:13] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:14] overlayfs: idmapped layers are currently not supported
	[ +11.752506] hrtimer: interrupt took 8273017 ns
	[Oct 6 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:21] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:23] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:47:51 up  1:30,  0 user,  load average: 0.10, 0.74, 1.49
	Linux force-systemd-flag-203169 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 19:47:47 force-systemd-flag-203169 kubelet[1787]: E1006 19:47:47.868446    1787 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 19:47:47 force-systemd-flag-203169 kubelet[1787]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 19:47:47 force-systemd-flag-203169 kubelet[1787]:  > podSandboxID="b8dd91b0756b8e3129dd60e2119ef5c0f74db426332a1f7baf5a69453737101b"
	Oct 06 19:47:47 force-systemd-flag-203169 kubelet[1787]: E1006 19:47:47.868596    1787 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 19:47:47 force-systemd-flag-203169 kubelet[1787]:         container kube-scheduler start failed in pod kube-scheduler-force-systemd-flag-203169_kube-system(5be40f1bedea36cfaaee58bffa301957): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 19:47:47 force-systemd-flag-203169 kubelet[1787]:  > logger="UnhandledError"
	Oct 06 19:47:47 force-systemd-flag-203169 kubelet[1787]: E1006 19:47:47.868639    1787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-force-systemd-flag-203169" podUID="5be40f1bedea36cfaaee58bffa301957"
	Oct 06 19:47:48 force-systemd-flag-203169 kubelet[1787]: E1006 19:47:48.495860    1787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-flag-203169.186bfe6e3f86088b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-flag-203169,UID:force-systemd-flag-203169,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-flag-203169 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-flag-203169,},FirstTimestamp:2025-10-06 19:43:48.883916939 +0000 UTC m=+0.807523771,LastTimestamp:2025-10-06 19:43:48.883916939 +0000 UTC m=+0.807523771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:force-systemd-flag-203169,}"
	Oct 06 19:47:48 force-systemd-flag-203169 kubelet[1787]: E1006 19:47:48.914488    1787 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-flag-203169\" not found"
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]: E1006 19:47:50.846500    1787 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-203169\" not found" node="force-systemd-flag-203169"
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]: E1006 19:47:50.846965    1787 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-203169\" not found" node="force-systemd-flag-203169"
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]: E1006 19:47:50.910254    1787 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]:  > podSandboxID="201e4edb5dc9de6c75b2e7205c1799070163cb85cb71271f39406ae276e3cd3d"
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]: E1006 19:47:50.910356    1787 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]:         container etcd start failed in pod etcd-force-systemd-flag-203169_kube-system(04002c091966ad10d5f9010a2b4f5ddd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]:  > logger="UnhandledError"
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]: E1006 19:47:50.910388    1787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-flag-203169" podUID="04002c091966ad10d5f9010a2b4f5ddd"
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]: E1006 19:47:50.912587    1787 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]:  > podSandboxID="8da077272138f9b5496d48a42e3e5fb81ba519c3410ffb74cf64a1dc990353a5"
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]: E1006 19:47:50.912675    1787 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]:         container kube-apiserver start failed in pod kube-apiserver-force-systemd-flag-203169_kube-system(3e8f5c700205c8e500ce6be063883992): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]:  > logger="UnhandledError"
	Oct 06 19:47:50 force-systemd-flag-203169 kubelet[1787]: E1006 19:47:50.912704    1787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-force-systemd-flag-203169" podUID="3e8f5c700205c8e500ce6be063883992"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-203169 -n force-systemd-flag-203169
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-203169 -n force-systemd-flag-203169: exit status 6 (332.655075ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 19:47:51.452166  178414 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-203169" does not appear in /home/jenkins/minikube-integration/21701-2540/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-flag-203169" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-203169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-203169
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-203169: (1.947015829s)
--- FAIL: TestForceSystemdFlag (513.67s)

                                                
                                    
x
+
TestForceSystemdEnv (512.93s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-760371 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1006 19:40:59.511818    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:42:56.447779    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:45:48.450124    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-760371 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m29.409412102s)

                                                
                                                
-- stdout --
	* [force-systemd-env-760371] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-760371" primary control-plane node in "force-systemd-env-760371" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:40:51.797040  174295 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:40:51.797217  174295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:40:51.797230  174295 out.go:374] Setting ErrFile to fd 2...
	I1006 19:40:51.797236  174295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:40:51.797518  174295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:40:51.797980  174295 out.go:368] Setting JSON to false
	I1006 19:40:51.798903  174295 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4987,"bootTime":1759774665,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:40:51.798971  174295 start.go:140] virtualization:  
	I1006 19:40:51.802558  174295 out.go:179] * [force-systemd-env-760371] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:40:51.805738  174295 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:40:51.805811  174295 notify.go:220] Checking for updates...
	I1006 19:40:51.811990  174295 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:40:51.815011  174295 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:40:51.817888  174295 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:40:51.820795  174295 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:40:51.823663  174295 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1006 19:40:51.827058  174295 config.go:182] Loaded profile config "force-systemd-flag-203169": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:40:51.827168  174295 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:40:51.860116  174295 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:40:51.860236  174295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:40:51.922437  174295 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:40:51.913312997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:40:51.922562  174295 docker.go:318] overlay module found
	I1006 19:40:51.925709  174295 out.go:179] * Using the docker driver based on user configuration
	I1006 19:40:51.928544  174295 start.go:304] selected driver: docker
	I1006 19:40:51.928564  174295 start.go:924] validating driver "docker" against <nil>
	I1006 19:40:51.928577  174295 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:40:51.929331  174295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:40:51.983563  174295 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:40:51.973907642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:40:51.983888  174295 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 19:40:51.984141  174295 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 19:40:51.987270  174295 out.go:179] * Using Docker driver with root privileges
	I1006 19:40:51.990162  174295 cni.go:84] Creating CNI manager for ""
	I1006 19:40:51.990244  174295 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:40:51.990259  174295 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 19:40:51.990349  174295 start.go:348] cluster config:
	{Name:force-systemd-env-760371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-760371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:40:51.993569  174295 out.go:179] * Starting "force-systemd-env-760371" primary control-plane node in "force-systemd-env-760371" cluster
	I1006 19:40:51.996529  174295 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:40:51.999434  174295 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:40:52.002305  174295 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:40:52.002353  174295 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:40:52.002376  174295 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:40:52.002394  174295 cache.go:58] Caching tarball of preloaded images
	I1006 19:40:52.002486  174295 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:40:52.002495  174295 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:40:52.002614  174295 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/config.json ...
	I1006 19:40:52.002645  174295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/config.json: {Name:mka31f5862185485bf03db99e8df838b3a1c83e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:40:52.031927  174295 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:40:52.031953  174295 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:40:52.031982  174295 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:40:52.032007  174295 start.go:360] acquireMachinesLock for force-systemd-env-760371: {Name:mk3287ebe7916dc03109d9ffe39570f41d010e75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:40:52.032126  174295 start.go:364] duration metric: took 96.714µs to acquireMachinesLock for "force-systemd-env-760371"
	I1006 19:40:52.032159  174295 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-760371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-760371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:40:52.032230  174295 start.go:125] createHost starting for "" (driver="docker")
	I1006 19:40:52.035692  174295 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 19:40:52.035965  174295 start.go:159] libmachine.API.Create for "force-systemd-env-760371" (driver="docker")
	I1006 19:40:52.036013  174295 client.go:168] LocalClient.Create starting
	I1006 19:40:52.036093  174295 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem
	I1006 19:40:52.036138  174295 main.go:141] libmachine: Decoding PEM data...
	I1006 19:40:52.036164  174295 main.go:141] libmachine: Parsing certificate...
	I1006 19:40:52.036228  174295 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem
	I1006 19:40:52.036251  174295 main.go:141] libmachine: Decoding PEM data...
	I1006 19:40:52.036267  174295 main.go:141] libmachine: Parsing certificate...
	I1006 19:40:52.036675  174295 cli_runner.go:164] Run: docker network inspect force-systemd-env-760371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 19:40:52.054537  174295 cli_runner.go:211] docker network inspect force-systemd-env-760371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 19:40:52.054631  174295 network_create.go:284] running [docker network inspect force-systemd-env-760371] to gather additional debugging logs...
	I1006 19:40:52.054649  174295 cli_runner.go:164] Run: docker network inspect force-systemd-env-760371
	W1006 19:40:52.071663  174295 cli_runner.go:211] docker network inspect force-systemd-env-760371 returned with exit code 1
	I1006 19:40:52.071692  174295 network_create.go:287] error running [docker network inspect force-systemd-env-760371]: docker network inspect force-systemd-env-760371: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-760371 not found
	I1006 19:40:52.071715  174295 network_create.go:289] output of [docker network inspect force-systemd-env-760371]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-760371 not found
	
	** /stderr **
	I1006 19:40:52.071861  174295 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:40:52.089269  174295 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7058eae896da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:57:c0:bd:de:1b} reservation:<nil>}
	I1006 19:40:52.089609  174295 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e321ce8cf6dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:e6:76:87:fe:bb} reservation:<nil>}
	I1006 19:40:52.089919  174295 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-17bdbd7bd7be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:93:f3:19:55:10} reservation:<nil>}
	I1006 19:40:52.090359  174295 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a16900}
	I1006 19:40:52.090382  174295 network_create.go:124] attempt to create docker network force-systemd-env-760371 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1006 19:40:52.090445  174295 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-760371 force-systemd-env-760371
	I1006 19:40:52.156058  174295 network_create.go:108] docker network force-systemd-env-760371 192.168.76.0/24 created
	I1006 19:40:52.156089  174295 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-760371" container
	I1006 19:40:52.156160  174295 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 19:40:52.172990  174295 cli_runner.go:164] Run: docker volume create force-systemd-env-760371 --label name.minikube.sigs.k8s.io=force-systemd-env-760371 --label created_by.minikube.sigs.k8s.io=true
	I1006 19:40:52.191801  174295 oci.go:103] Successfully created a docker volume force-systemd-env-760371
	I1006 19:40:52.191893  174295 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-760371-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-760371 --entrypoint /usr/bin/test -v force-systemd-env-760371:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 19:40:52.716336  174295 oci.go:107] Successfully prepared a docker volume force-systemd-env-760371
	I1006 19:40:52.716385  174295 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:40:52.716405  174295 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 19:40:52.716480  174295 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-760371:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 19:40:57.156119  174295 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-760371:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.439598228s)
	I1006 19:40:57.156150  174295 kic.go:203] duration metric: took 4.439741319s to extract preloaded images to volume ...
	W1006 19:40:57.156301  174295 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 19:40:57.156416  174295 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 19:40:57.206891  174295 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-760371 --name force-systemd-env-760371 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-760371 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-760371 --network force-systemd-env-760371 --ip 192.168.76.2 --volume force-systemd-env-760371:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 19:40:57.520934  174295 cli_runner.go:164] Run: docker container inspect force-systemd-env-760371 --format={{.State.Running}}
	I1006 19:40:57.552151  174295 cli_runner.go:164] Run: docker container inspect force-systemd-env-760371 --format={{.State.Status}}
	I1006 19:40:57.574692  174295 cli_runner.go:164] Run: docker exec force-systemd-env-760371 stat /var/lib/dpkg/alternatives/iptables
	I1006 19:40:57.622089  174295 oci.go:144] the created container "force-systemd-env-760371" has a running status.
	I1006 19:40:57.622131  174295 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa...
	I1006 19:40:59.583107  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 19:40:59.583203  174295 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 19:40:59.604321  174295 cli_runner.go:164] Run: docker container inspect force-systemd-env-760371 --format={{.State.Status}}
	I1006 19:40:59.625126  174295 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 19:40:59.625145  174295 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-760371 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 19:40:59.673877  174295 cli_runner.go:164] Run: docker container inspect force-systemd-env-760371 --format={{.State.Status}}
	I1006 19:40:59.692877  174295 machine.go:93] provisionDockerMachine start ...
	I1006 19:40:59.692967  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:40:59.711656  174295 main.go:141] libmachine: Using SSH client type: native
	I1006 19:40:59.712023  174295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1006 19:40:59.712040  174295 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:40:59.847484  174295 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-760371
	
	I1006 19:40:59.847510  174295 ubuntu.go:182] provisioning hostname "force-systemd-env-760371"
	I1006 19:40:59.847601  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:40:59.867121  174295 main.go:141] libmachine: Using SSH client type: native
	I1006 19:40:59.867449  174295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1006 19:40:59.867465  174295 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-760371 && echo "force-systemd-env-760371" | sudo tee /etc/hostname
	I1006 19:41:00.038864  174295 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-760371
	
	I1006 19:41:00.038995  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:41:00.132980  174295 main.go:141] libmachine: Using SSH client type: native
	I1006 19:41:00.133301  174295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1006 19:41:00.133319  174295 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-760371' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-760371/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-760371' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:41:00.464782  174295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:41:00.464813  174295 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:41:00.464836  174295 ubuntu.go:190] setting up certificates
	I1006 19:41:00.464846  174295 provision.go:84] configureAuth start
	I1006 19:41:00.464916  174295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-760371
	I1006 19:41:00.485780  174295 provision.go:143] copyHostCerts
	I1006 19:41:00.485844  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:41:00.485887  174295 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:41:00.485905  174295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:41:00.485995  174295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:41:00.486107  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:41:00.486132  174295 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:41:00.486138  174295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:41:00.486168  174295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:41:00.486227  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:41:00.486247  174295 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:41:00.486252  174295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:41:00.486290  174295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:41:00.486368  174295 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-760371 san=[127.0.0.1 192.168.76.2 force-systemd-env-760371 localhost minikube]
	I1006 19:41:01.528666  174295 provision.go:177] copyRemoteCerts
	I1006 19:41:01.528734  174295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:41:01.528772  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:41:01.546025  174295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa Username:docker}
	I1006 19:41:01.643653  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 19:41:01.643740  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:41:01.663314  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 19:41:01.663444  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1006 19:41:01.682495  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 19:41:01.682576  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 19:41:01.700705  174295 provision.go:87] duration metric: took 1.235845158s to configureAuth
	I1006 19:41:01.700743  174295 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:41:01.700919  174295 config.go:182] Loaded profile config "force-systemd-env-760371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:41:01.701015  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:41:01.718704  174295 main.go:141] libmachine: Using SSH client type: native
	I1006 19:41:01.719024  174295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1006 19:41:01.719045  174295 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:41:01.965441  174295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:41:01.965513  174295 machine.go:96] duration metric: took 2.272604241s to provisionDockerMachine
	I1006 19:41:01.965538  174295 client.go:171] duration metric: took 9.929513108s to LocalClient.Create
	I1006 19:41:01.965585  174295 start.go:167] duration metric: took 9.929621252s to libmachine.API.Create "force-systemd-env-760371"
	I1006 19:41:01.965609  174295 start.go:293] postStartSetup for "force-systemd-env-760371" (driver="docker")
	I1006 19:41:01.965631  174295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:41:01.965717  174295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:41:01.965783  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:41:01.983293  174295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa Username:docker}
	I1006 19:41:02.084270  174295 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:41:02.087759  174295 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:41:02.087834  174295 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:41:02.087851  174295 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:41:02.087905  174295 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:41:02.088001  174295 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:41:02.088013  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> /etc/ssl/certs/43502.pem
	I1006 19:41:02.088112  174295 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:41:02.095840  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:41:02.114882  174295 start.go:296] duration metric: took 149.246191ms for postStartSetup
	I1006 19:41:02.115392  174295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-760371
	I1006 19:41:02.132467  174295 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/config.json ...
	I1006 19:41:02.132769  174295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:41:02.132819  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:41:02.149974  174295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa Username:docker}
	I1006 19:41:02.244932  174295 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:41:02.249599  174295 start.go:128] duration metric: took 10.217354455s to createHost
	I1006 19:41:02.249623  174295 start.go:83] releasing machines lock for "force-systemd-env-760371", held for 10.2174823s
	I1006 19:41:02.249714  174295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-760371
	I1006 19:41:02.272838  174295 ssh_runner.go:195] Run: cat /version.json
	I1006 19:41:02.272900  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:41:02.273189  174295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:41:02.273253  174295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-760371
	I1006 19:41:02.294414  174295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa Username:docker}
	I1006 19:41:02.305374  174295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/force-systemd-env-760371/id_rsa Username:docker}
	I1006 19:41:02.391513  174295 ssh_runner.go:195] Run: systemctl --version
	I1006 19:41:02.481923  174295 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:41:02.520709  174295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:41:02.525156  174295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:41:02.525269  174295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:41:02.553233  174295 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 19:41:02.553257  174295 start.go:495] detecting cgroup driver to use...
	I1006 19:41:02.553273  174295 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1006 19:41:02.553329  174295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:41:02.570614  174295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:41:02.583481  174295 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:41:02.583543  174295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:41:02.601088  174295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:41:02.620821  174295 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:41:02.737845  174295 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:41:02.867493  174295 docker.go:234] disabling docker service ...
	I1006 19:41:02.867572  174295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:41:02.888701  174295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:41:02.902478  174295 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:41:03.028067  174295 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:41:03.142206  174295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:41:03.154863  174295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:41:03.170299  174295 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:41:03.170385  174295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:41:03.179993  174295 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1006 19:41:03.180106  174295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:41:03.190105  174295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:41:03.199593  174295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:41:03.209050  174295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:41:03.217073  174295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:41:03.226848  174295 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:41:03.240344  174295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:41:03.249485  174295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:41:03.257053  174295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:41:03.264276  174295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:41:03.370002  174295 ssh_runner.go:195] Run: sudo systemctl restart crio
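	(Note: the sed edits above rewrite the cri-o drop-in before this restart. Condensed into one sketch, using the same file and values as in this run, the cgroup-driver configuration amounts to:)
	  # point cri-o at kubeadm's pause image and the systemd cgroup manager
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl daemon-reload && sudo systemctl restart crio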
	I1006 19:41:03.509936  174295 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:41:03.510053  174295 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:41:03.513874  174295 start.go:563] Will wait 60s for crictl version
	I1006 19:41:03.513987  174295 ssh_runner.go:195] Run: which crictl
	I1006 19:41:03.517503  174295 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:41:03.542274  174295 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:41:03.542444  174295 ssh_runner.go:195] Run: crio --version
	I1006 19:41:03.573323  174295 ssh_runner.go:195] Run: crio --version
	I1006 19:41:03.606570  174295 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:41:03.609473  174295 cli_runner.go:164] Run: docker network inspect force-systemd-env-760371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:41:03.625527  174295 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1006 19:41:03.629485  174295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:41:03.640036  174295 kubeadm.go:883] updating cluster {Name:force-systemd-env-760371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-760371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:41:03.640151  174295 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:41:03.640213  174295 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:41:03.674093  174295 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:41:03.674116  174295 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:41:03.674169  174295 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:41:03.698815  174295 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:41:03.698837  174295 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:41:03.698845  174295 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1006 19:41:03.698942  174295 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-760371 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-760371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:41:03.699032  174295 ssh_runner.go:195] Run: crio config
	I1006 19:41:03.766351  174295 cni.go:84] Creating CNI manager for ""
	I1006 19:41:03.766375  174295 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:41:03.766390  174295 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:41:03.766416  174295 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-760371 NodeName:force-systemd-env-760371 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:41:03.766621  174295 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-760371"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:41:03.766713  174295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:41:03.774492  174295 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:41:03.774579  174295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:41:03.782430  174295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1006 19:41:03.795739  174295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:41:03.808673  174295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
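	(Note: the kubeadm config dumped above is what gets written here as kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml for the kubeadm init call. When reproducing this failure by hand, the generated config can be exercised without mutating node state first; a sketch, assuming the same paths and binary location as in this run:)
	  # validate the generated config without starting the control plane
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run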
	I1006 19:41:03.822858  174295 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:41:03.826601  174295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:41:03.836682  174295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:41:03.960387  174295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:41:03.976893  174295 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371 for IP: 192.168.76.2
	I1006 19:41:03.976912  174295 certs.go:195] generating shared ca certs ...
	I1006 19:41:03.976928  174295 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:41:03.977062  174295 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:41:03.977109  174295 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:41:03.977121  174295 certs.go:257] generating profile certs ...
	I1006 19:41:03.977183  174295 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/client.key
	I1006 19:41:03.977198  174295 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/client.crt with IP's: []
	I1006 19:41:05.086387  174295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/client.crt ...
	I1006 19:41:05.086423  174295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/client.crt: {Name:mk4601cd87b01add89161db4ec97c2390e11c2fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:41:05.086623  174295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/client.key ...
	I1006 19:41:05.086640  174295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/client.key: {Name:mkaac636eddfa1ad2ecfe724a4295502dd613c01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:41:05.086734  174295 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.key.1944f0cc
	I1006 19:41:05.086751  174295 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.crt.1944f0cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1006 19:41:05.650961  174295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.crt.1944f0cc ...
	I1006 19:41:05.650992  174295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.crt.1944f0cc: {Name:mk943e655c584b929141ef2fe7f12923c8e0fa73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:41:05.651174  174295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.key.1944f0cc ...
	I1006 19:41:05.651188  174295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.key.1944f0cc: {Name:mk860322bda7fa95f3ba379abadc4bd168011eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:41:05.651276  174295 certs.go:382] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.crt.1944f0cc -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.crt
	I1006 19:41:05.651355  174295 certs.go:386] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.key.1944f0cc -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.key
	I1006 19:41:05.651415  174295 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.key
	I1006 19:41:05.651432  174295 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.crt with IP's: []
	I1006 19:41:06.233074  174295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.crt ...
	I1006 19:41:06.233105  174295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.crt: {Name:mkc7fd3c3aa33713543f14b4493f54b98b3b84f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:41:06.233276  174295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.key ...
	I1006 19:41:06.233292  174295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.key: {Name:mk7b7f5c8dcf06fddd86deab74c50754274e64ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:41:06.233370  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 19:41:06.233391  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 19:41:06.233404  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 19:41:06.233420  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 19:41:06.233433  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 19:41:06.233451  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 19:41:06.233467  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 19:41:06.233482  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 19:41:06.233549  174295 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:41:06.233590  174295 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:41:06.233603  174295 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:41:06.233632  174295 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:41:06.233655  174295 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:41:06.233684  174295 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:41:06.233731  174295 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:41:06.233762  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> /usr/share/ca-certificates/43502.pem
	I1006 19:41:06.233775  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:41:06.233785  174295 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem -> /usr/share/ca-certificates/4350.pem
	I1006 19:41:06.234402  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:41:06.254144  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:41:06.273819  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:41:06.297082  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:41:06.314647  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1006 19:41:06.331886  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 19:41:06.349004  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:41:06.367448  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/force-systemd-env-760371/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 19:41:06.384819  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:41:06.402292  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:41:06.421555  174295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:41:06.438947  174295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:41:06.452527  174295 ssh_runner.go:195] Run: openssl version
	I1006 19:41:06.458967  174295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:41:06.467626  174295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:41:06.471331  174295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:41:06.471403  174295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:41:06.512784  174295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:41:06.521283  174295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:41:06.529716  174295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:41:06.533562  174295 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:41:06.533625  174295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:41:06.575558  174295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:41:06.584945  174295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:41:06.593816  174295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:41:06.598167  174295 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:41:06.598335  174295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:41:06.645983  174295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
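	(Note: the openssl/ln pairs above implement OpenSSL's hashed-directory layout: each CA certificate gets a /etc/ssl/certs/<subject-hash>.0 symlink so TLS clients can locate it at verify time. A sketch of that single step; the hash value, e.g. b5213941 earlier in this log, is derived from the certificate itself:)
	  # derive the subject hash and create the symlink openssl looks up during verification
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"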
	I1006 19:41:06.655833  174295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:41:06.659558  174295 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 19:41:06.659610  174295 kubeadm.go:400] StartCluster: {Name:force-systemd-env-760371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-760371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:41:06.659757  174295 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:41:06.659823  174295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:41:06.686617  174295 cri.go:89] found id: ""
	I1006 19:41:06.686702  174295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:41:06.694762  174295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 19:41:06.702594  174295 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:41:06.702685  174295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:41:06.710811  174295 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:41:06.710832  174295 kubeadm.go:157] found existing configuration files:
	
	I1006 19:41:06.710888  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 19:41:06.718787  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:41:06.718877  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:41:06.726392  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 19:41:06.734171  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:41:06.734307  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:41:06.741860  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 19:41:06.749926  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:41:06.750011  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:41:06.757695  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 19:41:06.765687  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:41:06.765755  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 19:41:06.773349  174295 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:41:06.814039  174295 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:41:06.814112  174295 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:41:06.840097  174295 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:41:06.840176  174295 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:41:06.840219  174295 kubeadm.go:318] OS: Linux
	I1006 19:41:06.840271  174295 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:41:06.840327  174295 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:41:06.840380  174295 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:41:06.840435  174295 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:41:06.840489  174295 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:41:06.840544  174295 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:41:06.840596  174295 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:41:06.840650  174295 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:41:06.840703  174295 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:41:06.911603  174295 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:41:06.911748  174295 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:41:06.911846  174295 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:41:06.924221  174295 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 19:41:06.927509  174295 out.go:252]   - Generating certificates and keys ...
	I1006 19:41:06.927651  174295 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:41:06.927776  174295 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:41:07.110656  174295 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 19:41:07.617959  174295 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 19:41:07.935778  174295 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 19:41:08.084012  174295 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 19:41:08.503643  174295 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 19:41:08.503818  174295 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-760371 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1006 19:41:08.549707  174295 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 19:41:08.550020  174295 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-760371 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1006 19:41:08.741675  174295 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 19:41:09.663097  174295 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 19:41:10.822315  174295 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 19:41:10.822845  174295 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 19:41:11.186368  174295 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:41:11.993890  174295 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:41:12.587160  174295 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:41:13.048610  174295 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:41:13.393251  174295 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:41:13.394054  174295 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:41:13.397031  174295 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 19:41:13.400489  174295 out.go:252]   - Booting up control plane ...
	I1006 19:41:13.400611  174295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:41:13.400702  174295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:41:13.402389  174295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:41:13.422255  174295 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:41:13.422371  174295 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:41:13.429765  174295 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:41:13.430143  174295 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:41:13.430444  174295 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:41:13.557208  174295 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:41:13.557336  174295 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:41:14.560096  174295 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001807913s
	I1006 19:41:14.562623  174295 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:41:14.562725  174295 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1006 19:41:14.562853  174295 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:41:14.562942  174295 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 19:45:14.563760  174295 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000821405s
	I1006 19:45:14.563949  174295 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000589905s
	I1006 19:45:14.565243  174295 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.002667498s
	I1006 19:45:14.565259  174295 kubeadm.go:318] 
	I1006 19:45:14.565358  174295 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 19:45:14.565459  174295 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 19:45:14.565567  174295 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 19:45:14.565856  174295 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 19:45:14.565949  174295 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 19:45:14.566034  174295 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 19:45:14.566043  174295 kubeadm.go:318] 
	I1006 19:45:14.570253  174295 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 19:45:14.570586  174295 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:45:14.570735  174295 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 19:45:14.571379  174295 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1006 19:45:14.571532  174295 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1006 19:45:14.571600  174295 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-760371 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-760371 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001807913s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000821405s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000589905s
	[control-plane-check] kube-scheduler is not healthy after 4m0.002667498s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-760371 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-760371 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001807913s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000821405s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000589905s
	[control-plane-check] kube-scheduler is not healthy after 4m0.002667498s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1006 19:45:14.571686  174295 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1006 19:45:15.135509  174295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:45:15.149786  174295 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:45:15.149854  174295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:45:15.157922  174295 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:45:15.157949  174295 kubeadm.go:157] found existing configuration files:
	
	I1006 19:45:15.158017  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 19:45:15.166151  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:45:15.166223  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:45:15.174347  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 19:45:15.182930  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:45:15.182996  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:45:15.190988  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 19:45:15.199110  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:45:15.199175  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:45:15.206901  174295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 19:45:15.214652  174295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:45:15.214740  174295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 19:45:15.222694  174295 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:45:15.266638  174295 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:45:15.266887  174295 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:45:15.290228  174295 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:45:15.290379  174295 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:45:15.290455  174295 kubeadm.go:318] OS: Linux
	I1006 19:45:15.290543  174295 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:45:15.290636  174295 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:45:15.290720  174295 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:45:15.290802  174295 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:45:15.290885  174295 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:45:15.290974  174295 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:45:15.291051  174295 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:45:15.291132  174295 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:45:15.291213  174295 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:45:15.362442  174295 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:45:15.362589  174295 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:45:15.362719  174295 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:45:15.369346  174295 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 19:45:15.376435  174295 out.go:252]   - Generating certificates and keys ...
	I1006 19:45:15.376561  174295 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:45:15.376650  174295 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:45:15.376745  174295 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1006 19:45:15.376822  174295 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1006 19:45:15.376921  174295 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1006 19:45:15.377019  174295 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1006 19:45:15.377134  174295 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1006 19:45:15.377237  174295 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1006 19:45:15.377357  174295 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1006 19:45:15.377465  174295 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1006 19:45:15.377531  174295 kubeadm.go:318] [certs] Using the existing "sa" key
	I1006 19:45:15.377615  174295 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 19:45:15.845806  174295 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:45:16.232193  174295 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:45:17.984854  174295 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:45:18.461357  174295 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:45:18.946485  174295 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:45:18.947350  174295 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:45:18.950114  174295 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 19:45:18.954413  174295 out.go:252]   - Booting up control plane ...
	I1006 19:45:18.954514  174295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:45:18.954592  174295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:45:18.954661  174295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:45:18.971046  174295 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:45:18.971186  174295 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:45:18.979065  174295 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:45:18.979554  174295 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:45:18.979643  174295 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:45:19.133097  174295 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:45:19.133219  174295 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:45:20.633601  174295 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500707292s
	I1006 19:45:20.636956  174295 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:45:20.637048  174295 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1006 19:45:20.637371  174295 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:45:20.637460  174295 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 19:49:20.637518  174295 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000395394s
	I1006 19:49:20.637975  174295 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000478989s
	I1006 19:49:20.639951  174295 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001315194s
	I1006 19:49:20.639974  174295 kubeadm.go:318] 
	I1006 19:49:20.640345  174295 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 19:49:20.640497  174295 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 19:49:20.640658  174295 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 19:49:20.640831  174295 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 19:49:20.641258  174295 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 19:49:20.641412  174295 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 19:49:20.641418  174295 kubeadm.go:318] 
	I1006 19:49:20.646551  174295 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 19:49:20.646889  174295 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:49:20.647051  174295 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 19:49:20.647734  174295 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1006 19:49:20.647868  174295 kubeadm.go:402] duration metric: took 8m13.988260704s to StartCluster
	I1006 19:49:20.647918  174295 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 19:49:20.647920  174295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:49:20.648072  174295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:49:20.674405  174295 cri.go:89] found id: ""
	I1006 19:49:20.674448  174295 logs.go:282] 0 containers: []
	W1006 19:49:20.674457  174295 logs.go:284] No container was found matching "kube-apiserver"
	I1006 19:49:20.674464  174295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:49:20.674544  174295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:49:20.701758  174295 cri.go:89] found id: ""
	I1006 19:49:20.701781  174295 logs.go:282] 0 containers: []
	W1006 19:49:20.701790  174295 logs.go:284] No container was found matching "etcd"
	I1006 19:49:20.701796  174295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:49:20.701873  174295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:49:20.729034  174295 cri.go:89] found id: ""
	I1006 19:49:20.729107  174295 logs.go:282] 0 containers: []
	W1006 19:49:20.729122  174295 logs.go:284] No container was found matching "coredns"
	I1006 19:49:20.729129  174295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:49:20.729187  174295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:49:20.759630  174295 cri.go:89] found id: ""
	I1006 19:49:20.759656  174295 logs.go:282] 0 containers: []
	W1006 19:49:20.759664  174295 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:49:20.759671  174295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:49:20.759754  174295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:49:20.785020  174295 cri.go:89] found id: ""
	I1006 19:49:20.785043  174295 logs.go:282] 0 containers: []
	W1006 19:49:20.785052  174295 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:49:20.785058  174295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:49:20.785149  174295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:49:20.811442  174295 cri.go:89] found id: ""
	I1006 19:49:20.811467  174295 logs.go:282] 0 containers: []
	W1006 19:49:20.811475  174295 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 19:49:20.811482  174295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:49:20.811558  174295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:49:20.842643  174295 cri.go:89] found id: ""
	I1006 19:49:20.842668  174295 logs.go:282] 0 containers: []
	W1006 19:49:20.842680  174295 logs.go:284] No container was found matching "kindnet"
	I1006 19:49:20.842711  174295 logs.go:123] Gathering logs for dmesg ...
	I1006 19:49:20.842728  174295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:49:20.857508  174295 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:49:20.857535  174295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:49:20.928425  174295 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 19:49:20.919545    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.920762    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.921563    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.922756    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.924346    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 19:49:20.919545    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.920762    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.921563    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.922756    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.924346    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:49:20.928448  174295 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:49:20.928460  174295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:49:21.006753  174295 logs.go:123] Gathering logs for container status ...
	I1006 19:49:21.006787  174295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:49:21.041402  174295 logs.go:123] Gathering logs for kubelet ...
	I1006 19:49:21.041429  174295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1006 19:49:21.131368  174295 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500707292s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000395394s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000478989s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001315194s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 19:49:21.131428  174295 out.go:285] * 
	* 
	W1006 19:49:21.131481  174295 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500707292s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000395394s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000478989s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001315194s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 19:49:21.131497  174295 out.go:285] * 
	* 
	W1006 19:49:21.133691  174295 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 19:49:21.139580  174295 out.go:203] 
	W1006 19:49:21.143341  174295 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500707292s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000395394s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000478989s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001315194s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 19:49:21.143384  174295 out.go:285] * 
	* 
	I1006 19:49:21.146946  174295 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-760371 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-10-06 19:49:21.194157596 +0000 UTC m=+4070.713224684
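helpers_test.go: (context) The wait-control-plane failure above indicates the control-plane containers never became healthy under CRI-O. A minimal triage sketch, assuming shell access to the node via the same profile (commands are the ones the captured log itself suggests; the profile name force-systemd-env-760371 is taken from this run):
	out/minikube-linux-arm64 ssh -p force-systemd-env-760371
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	out/minikube-linux-arm64 -p force-systemd-env-760371 logs --file=logs.txt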
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-env-760371
helpers_test.go:243: (dbg) docker inspect force-systemd-env-760371:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1a4745ab8b8b5e028273856b28cc93268bfab192f42f6b9aced306d02985be10",
	        "Created": "2025-10-06T19:40:57.222373172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 174698,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:40:57.306583405Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/1a4745ab8b8b5e028273856b28cc93268bfab192f42f6b9aced306d02985be10/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1a4745ab8b8b5e028273856b28cc93268bfab192f42f6b9aced306d02985be10/hostname",
	        "HostsPath": "/var/lib/docker/containers/1a4745ab8b8b5e028273856b28cc93268bfab192f42f6b9aced306d02985be10/hosts",
	        "LogPath": "/var/lib/docker/containers/1a4745ab8b8b5e028273856b28cc93268bfab192f42f6b9aced306d02985be10/1a4745ab8b8b5e028273856b28cc93268bfab192f42f6b9aced306d02985be10-json.log",
	        "Name": "/force-systemd-env-760371",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-760371:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-760371",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1a4745ab8b8b5e028273856b28cc93268bfab192f42f6b9aced306d02985be10",
	                "LowerDir": "/var/lib/docker/overlay2/6ca684a3dbb5fd01f86c41b13046aba66cb2725aa70dca206c3e4827b15e4334-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ca684a3dbb5fd01f86c41b13046aba66cb2725aa70dca206c3e4827b15e4334/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ca684a3dbb5fd01f86c41b13046aba66cb2725aa70dca206c3e4827b15e4334/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ca684a3dbb5fd01f86c41b13046aba66cb2725aa70dca206c3e4827b15e4334/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-760371",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-760371/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-760371",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-760371",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-760371",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63100c245467f5a14cb8f6ea7f155b90be90e358df3a7911dfa03af6ecb771c3",
	            "SandboxKey": "/var/run/docker/netns/63100c245467",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33035"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33036"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33039"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33037"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33038"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-760371": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:a0:c1:ab:ba:d3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6cf41fc12578d9dd8ef61f5a2179ab066d1670029e2853f3cfdce4ecf47aa4a7",
	                    "EndpointID": "84151be780d6ae8475b9113768f2d290dfe7cd9ac0b7d31e77bb29d2ba5fc083",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-760371",
	                        "1a4745ab8b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-760371 -n force-systemd-env-760371
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-760371 -n force-systemd-env-760371: exit status 6 (342.257535ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 19:49:21.565512  181496 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-760371" does not appear in /home/jenkins/minikube-integration/21701-2540/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-760371 logs -n 25
helpers_test.go:260: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-053944 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl status docker --all --full --no-pager                                      │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl cat docker --no-pager                                                      │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /etc/docker/daemon.json                                                          │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo docker system info                                                                   │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cri-dockerd --version                                                                │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl cat containerd --no-pager                                                  │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /etc/containerd/config.toml                                                      │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo containerd config dump                                                               │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl status crio --all --full --no-pager                                        │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl cat crio --no-pager                                                        │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo crio config                                                                          │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ delete  │ -p cilium-053944                                                                                           │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │ 06 Oct 25 19:40 UTC │
	│ start   │ -p force-systemd-env-760371 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-760371  │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ force-systemd-flag-203169 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-203169 │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:47 UTC │
	│ delete  │ -p force-systemd-flag-203169                                                                               │ force-systemd-flag-203169 │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:47 UTC │
	│ start   │ -p cert-expiration-585086 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio     │ cert-expiration-585086    │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:48 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:47:53
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:47:53.456682  178790 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:47:53.456853  178790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:47:53.456858  178790 out.go:374] Setting ErrFile to fd 2...
	I1006 19:47:53.456862  178790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:47:53.457135  178790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:47:53.457547  178790 out.go:368] Setting JSON to false
	I1006 19:47:53.458447  178790 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5409,"bootTime":1759774665,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:47:53.458507  178790 start.go:140] virtualization:  
	I1006 19:47:53.462429  178790 out.go:179] * [cert-expiration-585086] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:47:53.467086  178790 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:47:53.467201  178790 notify.go:220] Checking for updates...
	I1006 19:47:53.473862  178790 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:47:53.477178  178790 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:47:53.480320  178790 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:47:53.483568  178790 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:47:53.486761  178790 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:47:53.490380  178790 config.go:182] Loaded profile config "force-systemd-env-760371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:47:53.490483  178790 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:47:53.521955  178790 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:47:53.522088  178790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:47:53.592793  178790 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:47:53.579318629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:47:53.592888  178790 docker.go:318] overlay module found
	I1006 19:47:53.596253  178790 out.go:179] * Using the docker driver based on user configuration
	I1006 19:47:53.599302  178790 start.go:304] selected driver: docker
	I1006 19:47:53.599309  178790 start.go:924] validating driver "docker" against <nil>
	I1006 19:47:53.599320  178790 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:47:53.600125  178790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:47:53.657314  178790 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:47:53.648022775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:47:53.657461  178790 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 19:47:53.657678  178790 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 19:47:53.660540  178790 out.go:179] * Using Docker driver with root privileges
	I1006 19:47:53.663476  178790 cni.go:84] Creating CNI manager for ""
	I1006 19:47:53.663530  178790 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:47:53.663538  178790 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 19:47:53.663619  178790 start.go:348] cluster config:
	{Name:cert-expiration-585086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-585086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:47:53.666858  178790 out.go:179] * Starting "cert-expiration-585086" primary control-plane node in "cert-expiration-585086" cluster
	I1006 19:47:53.669737  178790 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:47:53.672678  178790 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:47:53.675520  178790 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:47:53.675567  178790 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:47:53.675577  178790 cache.go:58] Caching tarball of preloaded images
	I1006 19:47:53.675581  178790 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:47:53.675662  178790 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:47:53.675670  178790 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:47:53.675800  178790 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/config.json ...
	I1006 19:47:53.675819  178790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/config.json: {Name:mke8e7a42964d3fe10ece2a6b48190719acaa4a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:47:53.695064  178790 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:47:53.695075  178790 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:47:53.695095  178790 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:47:53.695115  178790 start.go:360] acquireMachinesLock for cert-expiration-585086: {Name:mkfbc592fc0fdee897fdcca1ec0865b663d6035c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:47:53.695212  178790 start.go:364] duration metric: took 82.684µs to acquireMachinesLock for "cert-expiration-585086"
	I1006 19:47:53.695234  178790 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-585086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-585086 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:47:53.695293  178790 start.go:125] createHost starting for "" (driver="docker")
	I1006 19:47:53.698731  178790 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 19:47:53.698958  178790 start.go:159] libmachine.API.Create for "cert-expiration-585086" (driver="docker")
	I1006 19:47:53.699003  178790 client.go:168] LocalClient.Create starting
	I1006 19:47:53.699103  178790 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem
	I1006 19:47:53.699143  178790 main.go:141] libmachine: Decoding PEM data...
	I1006 19:47:53.699158  178790 main.go:141] libmachine: Parsing certificate...
	I1006 19:47:53.699221  178790 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem
	I1006 19:47:53.699251  178790 main.go:141] libmachine: Decoding PEM data...
	I1006 19:47:53.699259  178790 main.go:141] libmachine: Parsing certificate...
	I1006 19:47:53.699630  178790 cli_runner.go:164] Run: docker network inspect cert-expiration-585086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 19:47:53.715777  178790 cli_runner.go:211] docker network inspect cert-expiration-585086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 19:47:53.715863  178790 network_create.go:284] running [docker network inspect cert-expiration-585086] to gather additional debugging logs...
	I1006 19:47:53.715880  178790 cli_runner.go:164] Run: docker network inspect cert-expiration-585086
	W1006 19:47:53.738681  178790 cli_runner.go:211] docker network inspect cert-expiration-585086 returned with exit code 1
	I1006 19:47:53.738702  178790 network_create.go:287] error running [docker network inspect cert-expiration-585086]: docker network inspect cert-expiration-585086: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-585086 not found
	I1006 19:47:53.738714  178790 network_create.go:289] output of [docker network inspect cert-expiration-585086]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-585086 not found
	
	** /stderr **
	I1006 19:47:53.738829  178790 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:47:53.756146  178790 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7058eae896da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:57:c0:bd:de:1b} reservation:<nil>}
	I1006 19:47:53.756458  178790 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e321ce8cf6dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:e6:76:87:fe:bb} reservation:<nil>}
	I1006 19:47:53.756767  178790 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-17bdbd7bd7be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:93:f3:19:55:10} reservation:<nil>}
	I1006 19:47:53.756991  178790 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6cf41fc12578 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ca:52:43:da:c6:96} reservation:<nil>}
	I1006 19:47:53.757378  178790 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a37a80}
	I1006 19:47:53.757397  178790 network_create.go:124] attempt to create docker network cert-expiration-585086 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1006 19:47:53.757450  178790 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-585086 cert-expiration-585086
	I1006 19:47:53.830519  178790 network_create.go:108] docker network cert-expiration-585086 192.168.85.0/24 created
	I1006 19:47:53.830541  178790 kic.go:121] calculated static IP "192.168.85.2" for the "cert-expiration-585086" container
	I1006 19:47:53.830613  178790 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 19:47:53.847465  178790 cli_runner.go:164] Run: docker volume create cert-expiration-585086 --label name.minikube.sigs.k8s.io=cert-expiration-585086 --label created_by.minikube.sigs.k8s.io=true
	I1006 19:47:53.866038  178790 oci.go:103] Successfully created a docker volume cert-expiration-585086
	I1006 19:47:53.866122  178790 cli_runner.go:164] Run: docker run --rm --name cert-expiration-585086-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-585086 --entrypoint /usr/bin/test -v cert-expiration-585086:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 19:47:54.402069  178790 oci.go:107] Successfully prepared a docker volume cert-expiration-585086
	I1006 19:47:54.402107  178790 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:47:54.402124  178790 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 19:47:54.402207  178790 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-585086:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 19:47:58.874490  178790 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-585086:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.47221955s)
	I1006 19:47:58.874512  178790 kic.go:203] duration metric: took 4.472383655s to extract preloaded images to volume ...
	W1006 19:47:58.874651  178790 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 19:47:58.874789  178790 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 19:47:58.944371  178790 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-585086 --name cert-expiration-585086 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-585086 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-585086 --network cert-expiration-585086 --ip 192.168.85.2 --volume cert-expiration-585086:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 19:47:59.260887  178790 cli_runner.go:164] Run: docker container inspect cert-expiration-585086 --format={{.State.Running}}
	I1006 19:47:59.282922  178790 cli_runner.go:164] Run: docker container inspect cert-expiration-585086 --format={{.State.Status}}
	I1006 19:47:59.308973  178790 cli_runner.go:164] Run: docker exec cert-expiration-585086 stat /var/lib/dpkg/alternatives/iptables
	I1006 19:47:59.360730  178790 oci.go:144] the created container "cert-expiration-585086" has a running status.
	I1006 19:47:59.360758  178790 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa...
	I1006 19:48:00.028623  178790 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 19:48:00.061342  178790 cli_runner.go:164] Run: docker container inspect cert-expiration-585086 --format={{.State.Status}}
	I1006 19:48:00.089879  178790 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 19:48:00.089891  178790 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-585086 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 19:48:00.190321  178790 cli_runner.go:164] Run: docker container inspect cert-expiration-585086 --format={{.State.Status}}
	I1006 19:48:00.277217  178790 machine.go:93] provisionDockerMachine start ...
	I1006 19:48:00.277328  178790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:48:00.350768  178790 main.go:141] libmachine: Using SSH client type: native
	I1006 19:48:00.351120  178790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33040 <nil> <nil>}
	I1006 19:48:00.351128  178790 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:48:00.556978  178790 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-585086
	
	I1006 19:48:00.556996  178790 ubuntu.go:182] provisioning hostname "cert-expiration-585086"
	I1006 19:48:00.557099  178790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:48:00.583530  178790 main.go:141] libmachine: Using SSH client type: native
	I1006 19:48:00.583879  178790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33040 <nil> <nil>}
	I1006 19:48:00.583889  178790 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-585086 && echo "cert-expiration-585086" | sudo tee /etc/hostname
	I1006 19:48:00.756876  178790 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-585086
	
	I1006 19:48:00.756947  178790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:48:00.776112  178790 main.go:141] libmachine: Using SSH client type: native
	I1006 19:48:00.776449  178790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33040 <nil> <nil>}
	I1006 19:48:00.776467  178790 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-585086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-585086/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-585086' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:48:00.919986  178790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:48:00.920003  178790 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:48:00.920021  178790 ubuntu.go:190] setting up certificates
	I1006 19:48:00.920028  178790 provision.go:84] configureAuth start
	I1006 19:48:00.920084  178790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-585086
	I1006 19:48:00.937279  178790 provision.go:143] copyHostCerts
	I1006 19:48:00.937338  178790 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:48:00.937346  178790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:48:00.937428  178790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:48:00.937835  178790 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:48:00.937847  178790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:48:00.937912  178790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:48:00.937987  178790 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:48:00.937991  178790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:48:00.938015  178790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:48:00.938081  178790 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-585086 san=[127.0.0.1 192.168.85.2 cert-expiration-585086 localhost minikube]
	I1006 19:48:01.023979  178790 provision.go:177] copyRemoteCerts
	I1006 19:48:01.024031  178790 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:48:01.024073  178790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:48:01.042920  178790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:48:01.139916  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 19:48:01.160040  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:48:01.178879  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1006 19:48:01.199019  178790 provision.go:87] duration metric: took 278.977714ms to configureAuth
	I1006 19:48:01.199043  178790 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:48:01.199228  178790 config.go:182] Loaded profile config "cert-expiration-585086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:48:01.199330  178790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:48:01.217324  178790 main.go:141] libmachine: Using SSH client type: native
	I1006 19:48:01.217691  178790 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33040 <nil> <nil>}
	I1006 19:48:01.217706  178790 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:48:01.463398  178790 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:48:01.463410  178790 machine.go:96] duration metric: took 1.186180097s to provisionDockerMachine
	I1006 19:48:01.463418  178790 client.go:171] duration metric: took 7.764410378s to LocalClient.Create
	I1006 19:48:01.463430  178790 start.go:167] duration metric: took 7.764473697s to libmachine.API.Create "cert-expiration-585086"
	I1006 19:48:01.463436  178790 start.go:293] postStartSetup for "cert-expiration-585086" (driver="docker")
	I1006 19:48:01.463445  178790 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:48:01.463510  178790 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:48:01.463549  178790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:48:01.481115  178790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:48:01.580141  178790 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:48:01.583631  178790 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:48:01.583649  178790 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:48:01.583667  178790 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:48:01.583734  178790 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:48:01.583821  178790 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:48:01.583925  178790 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:48:01.591895  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:48:01.610622  178790 start.go:296] duration metric: took 147.171903ms for postStartSetup
	I1006 19:48:01.611010  178790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-585086
	I1006 19:48:01.628003  178790 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/config.json ...
	I1006 19:48:01.628286  178790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:48:01.628338  178790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:48:01.646286  178790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:48:01.744777  178790 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:48:01.749590  178790 start.go:128] duration metric: took 8.054284685s to createHost
	I1006 19:48:01.749604  178790 start.go:83] releasing machines lock for "cert-expiration-585086", held for 8.054385404s
	I1006 19:48:01.749678  178790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-585086
	I1006 19:48:01.769663  178790 ssh_runner.go:195] Run: cat /version.json
	I1006 19:48:01.769708  178790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:48:01.769970  178790 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:48:01.770023  178790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:48:01.790127  178790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:48:01.801289  178790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:48:01.891422  178790 ssh_runner.go:195] Run: systemctl --version
	I1006 19:48:01.984737  178790 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:48:02.024742  178790 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:48:02.029495  178790 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:48:02.029558  178790 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:48:02.059609  178790 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 19:48:02.059623  178790 start.go:495] detecting cgroup driver to use...
	I1006 19:48:02.059653  178790 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:48:02.059744  178790 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:48:02.084948  178790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:48:02.098188  178790 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:48:02.098244  178790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:48:02.116153  178790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:48:02.134743  178790 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:48:02.261948  178790 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:48:02.398536  178790 docker.go:234] disabling docker service ...
	I1006 19:48:02.398594  178790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:48:02.437959  178790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:48:02.455068  178790 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:48:02.569885  178790 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:48:02.687479  178790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:48:02.702134  178790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:48:02.716095  178790 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:48:02.716155  178790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:48:02.724941  178790 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:48:02.725005  178790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:48:02.734078  178790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:48:02.743894  178790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:48:02.753339  178790 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:48:02.762191  178790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:48:02.771492  178790 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:48:02.785773  178790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:48:02.795298  178790 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:48:02.802981  178790 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:48:02.810459  178790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:48:02.926305  178790 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 19:48:03.057023  178790 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:48:03.057084  178790 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:48:03.061128  178790 start.go:563] Will wait 60s for crictl version
	I1006 19:48:03.061184  178790 ssh_runner.go:195] Run: which crictl
	I1006 19:48:03.064841  178790 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:48:03.090302  178790 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:48:03.090388  178790 ssh_runner.go:195] Run: crio --version
	I1006 19:48:03.119455  178790 ssh_runner.go:195] Run: crio --version
	I1006 19:48:03.150624  178790 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:48:03.153542  178790 cli_runner.go:164] Run: docker network inspect cert-expiration-585086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:48:03.170373  178790 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1006 19:48:03.174334  178790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
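The /etc/hosts command above uses a replace-then-copy pattern so the host.minikube.internal record is refreshed rather than appended repeatedly. A minimal restatement of the same command, split out with comments (paths and the temp-file name mirror the log):

    # drop any stale host.minikube.internal entry, append the current mapping,
    # then install the result over /etc/hosts with sudo
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts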
	I1006 19:48:03.184409  178790 kubeadm.go:883] updating cluster {Name:cert-expiration-585086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-585086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:48:03.184518  178790 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:48:03.184574  178790 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:48:03.221049  178790 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:48:03.221061  178790 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:48:03.221116  178790 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:48:03.247298  178790 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:48:03.247311  178790 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:48:03.247319  178790 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1006 19:48:03.247404  178790 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-585086 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-585086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:48:03.247486  178790 ssh_runner.go:195] Run: crio config
	I1006 19:48:03.298794  178790 cni.go:84] Creating CNI manager for ""
	I1006 19:48:03.298805  178790 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:48:03.298819  178790 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:48:03.298842  178790 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-585086 NodeName:cert-expiration-585086 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:48:03.298968  178790 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-585086"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:48:03.299041  178790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:48:03.306789  178790 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:48:03.306862  178790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:48:03.314609  178790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1006 19:48:03.327002  178790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:48:03.339844  178790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
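The rendered kubeadm config above has just been copied to /var/tmp/minikube/kubeadm.yaml.new and is handed to kubeadm init further below. If one wanted to sanity-check it by hand first, a hedged sketch (not something this test run does) is a dry run against the same file and binary path:

    # render what kubeadm would do with this config without changing the node
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run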
	I1006 19:48:03.352663  178790 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:48:03.356292  178790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:48:03.365778  178790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:48:03.491857  178790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:48:03.509381  178790 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086 for IP: 192.168.85.2
	I1006 19:48:03.509393  178790 certs.go:195] generating shared ca certs ...
	I1006 19:48:03.509407  178790 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:48:03.509564  178790 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:48:03.509602  178790 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:48:03.509607  178790 certs.go:257] generating profile certs ...
	I1006 19:48:03.509681  178790 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/client.key
	I1006 19:48:03.509698  178790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/client.crt with IP's: []
	I1006 19:48:03.595677  178790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/client.crt ...
	I1006 19:48:03.595716  178790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/client.crt: {Name:mk8473ec9720591155919170682d2ac639e105fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:48:03.595924  178790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/client.key ...
	I1006 19:48:03.595932  178790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/client.key: {Name:mkc87aac4183b42285cb6774ea0a6dcb476e17d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:48:03.596020  178790 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/apiserver.key.ed43be81
	I1006 19:48:03.596032  178790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/apiserver.crt.ed43be81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1006 19:48:05.976721  178790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/apiserver.crt.ed43be81 ...
	I1006 19:48:05.976738  178790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/apiserver.crt.ed43be81: {Name:mk863ad8f47c0cbc189a8bb47c1df0fd87d4f18e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:48:05.976928  178790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/apiserver.key.ed43be81 ...
	I1006 19:48:05.976935  178790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/apiserver.key.ed43be81: {Name:mk32f16e988318df6b5c39b4c714ae089d96c470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:48:05.977019  178790 certs.go:382] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/apiserver.crt.ed43be81 -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/apiserver.crt
	I1006 19:48:05.977096  178790 certs.go:386] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/apiserver.key.ed43be81 -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/apiserver.key
	I1006 19:48:05.977148  178790 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/proxy-client.key
	I1006 19:48:05.977160  178790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/proxy-client.crt with IP's: []
	I1006 19:48:06.286351  178790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/proxy-client.crt ...
	I1006 19:48:06.286366  178790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/proxy-client.crt: {Name:mk2da6e2bcee84c7358b2d6e238289028d367c94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:48:06.286552  178790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/proxy-client.key ...
	I1006 19:48:06.286559  178790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/proxy-client.key: {Name:mkfed1ff538c182780c1e27f0a69fda9d828d407 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:48:06.286735  178790 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:48:06.286770  178790 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:48:06.286777  178790 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:48:06.286801  178790 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:48:06.286822  178790 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:48:06.286847  178790 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:48:06.286893  178790 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:48:06.287550  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:48:06.306080  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:48:06.324257  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:48:06.342582  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:48:06.360638  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1006 19:48:06.379053  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 19:48:06.397051  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:48:06.414806  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 19:48:06.432930  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:48:06.451276  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:48:06.468691  178790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:48:06.486611  178790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:48:06.500944  178790 ssh_runner.go:195] Run: openssl version
	I1006 19:48:06.507048  178790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:48:06.515433  178790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:48:06.519356  178790 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:48:06.519422  178790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:48:06.561286  178790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:48:06.571925  178790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:48:06.581781  178790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:48:06.586148  178790 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:48:06.586200  178790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:48:06.631397  178790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:48:06.640557  178790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:48:06.649521  178790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:48:06.653649  178790 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:48:06.653721  178790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:48:06.695520  178790 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
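The openssl/ln pairs above implement OpenSSL's hashed-directory CA lookup: each PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash (b5213941, 51391683, 3ec20f2e in this run). The same scheme for one certificate, as a small sketch:

    # compute the subject hash OpenSSL uses to locate CAs in /etc/ssl/certs
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # ".0" means "first certificate with this hash value"
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"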
	I1006 19:48:06.704315  178790 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:48:06.708091  178790 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 19:48:06.708136  178790 kubeadm.go:400] StartCluster: {Name:cert-expiration-585086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-585086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:48:06.708201  178790 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:48:06.708272  178790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:48:06.735031  178790 cri.go:89] found id: ""
	I1006 19:48:06.735096  178790 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:48:06.743084  178790 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 19:48:06.750937  178790 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:48:06.751002  178790 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:48:06.758838  178790 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:48:06.758846  178790 kubeadm.go:157] found existing configuration files:
	
	I1006 19:48:06.758897  178790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 19:48:06.766648  178790 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:48:06.766703  178790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:48:06.774201  178790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 19:48:06.782172  178790 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:48:06.782227  178790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:48:06.789752  178790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 19:48:06.797793  178790 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:48:06.797866  178790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:48:06.805808  178790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 19:48:06.813913  178790 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:48:06.813967  178790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 19:48:06.821462  178790 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:48:06.860060  178790 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:48:06.860112  178790 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:48:06.884267  178790 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:48:06.884335  178790 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:48:06.884370  178790 kubeadm.go:318] OS: Linux
	I1006 19:48:06.884416  178790 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:48:06.884466  178790 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:48:06.884514  178790 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:48:06.884564  178790 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:48:06.884613  178790 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:48:06.884662  178790 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:48:06.884708  178790 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:48:06.884757  178790 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:48:06.884805  178790 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:48:06.961543  178790 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:48:06.961651  178790 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:48:06.961745  178790 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:48:06.971537  178790 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 19:48:06.976203  178790 out.go:252]   - Generating certificates and keys ...
	I1006 19:48:06.976295  178790 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:48:06.976365  178790 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:48:07.925063  178790 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 19:48:08.161984  178790 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 19:48:08.469751  178790 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 19:48:08.947134  178790 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 19:48:09.316263  178790 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 19:48:09.316560  178790 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-585086 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1006 19:48:10.263373  178790 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 19:48:10.263686  178790 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-585086 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1006 19:48:10.458537  178790 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 19:48:11.813736  178790 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 19:48:12.504414  178790 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 19:48:12.504727  178790 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 19:48:12.839662  178790 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:48:13.437795  178790 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:48:13.529094  178790 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:48:14.472517  178790 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:48:15.012069  178790 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:48:15.020431  178790 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:48:15.020515  178790 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 19:48:15.035411  178790 out.go:252]   - Booting up control plane ...
	I1006 19:48:15.035541  178790 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:48:15.035623  178790 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:48:15.035716  178790 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:48:15.058526  178790 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:48:15.058641  178790 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:48:15.068599  178790 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:48:15.069271  178790 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:48:15.069562  178790 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:48:15.208301  178790 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:48:15.208418  178790 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:48:17.207343  178790 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001766307s
	I1006 19:48:17.211004  178790 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:48:17.211095  178790 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1006 19:48:17.211187  178790 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:48:17.211268  178790 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 19:48:19.220746  178790 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.009247702s
	I1006 19:48:20.988764  178790 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.777747281s
	I1006 19:48:22.713042  178790 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.501822701s
	I1006 19:48:22.733012  178790 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 19:48:22.748799  178790 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 19:48:22.764045  178790 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 19:48:22.764251  178790 kubeadm.go:318] [mark-control-plane] Marking the node cert-expiration-585086 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 19:48:22.777051  178790 kubeadm.go:318] [bootstrap-token] Using token: 9pukwn.4mp7dl95xh9n9o1p
	I1006 19:48:22.780122  178790 out.go:252]   - Configuring RBAC rules ...
	I1006 19:48:22.780263  178790 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 19:48:22.784840  178790 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 19:48:22.793421  178790 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 19:48:22.797649  178790 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 19:48:22.803675  178790 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 19:48:22.807774  178790 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 19:48:23.123259  178790 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 19:48:23.553868  178790 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1006 19:48:24.119478  178790 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1006 19:48:24.120615  178790 kubeadm.go:318] 
	I1006 19:48:24.120685  178790 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1006 19:48:24.120690  178790 kubeadm.go:318] 
	I1006 19:48:24.120770  178790 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1006 19:48:24.120774  178790 kubeadm.go:318] 
	I1006 19:48:24.120799  178790 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1006 19:48:24.120860  178790 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 19:48:24.120912  178790 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 19:48:24.120915  178790 kubeadm.go:318] 
	I1006 19:48:24.120974  178790 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1006 19:48:24.120978  178790 kubeadm.go:318] 
	I1006 19:48:24.121027  178790 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 19:48:24.121030  178790 kubeadm.go:318] 
	I1006 19:48:24.121084  178790 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1006 19:48:24.121161  178790 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 19:48:24.121231  178790 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 19:48:24.121235  178790 kubeadm.go:318] 
	I1006 19:48:24.121321  178790 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 19:48:24.121401  178790 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1006 19:48:24.121405  178790 kubeadm.go:318] 
	I1006 19:48:24.121491  178790 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 9pukwn.4mp7dl95xh9n9o1p \
	I1006 19:48:24.121598  178790 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd \
	I1006 19:48:24.121618  178790 kubeadm.go:318] 	--control-plane 
	I1006 19:48:24.121621  178790 kubeadm.go:318] 
	I1006 19:48:24.121708  178790 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1006 19:48:24.121712  178790 kubeadm.go:318] 
	I1006 19:48:24.121796  178790 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 9pukwn.4mp7dl95xh9n9o1p \
	I1006 19:48:24.121919  178790 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd 
	I1006 19:48:24.126665  178790 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 19:48:24.126893  178790 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:48:24.127005  178790 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 19:48:24.127022  178790 cni.go:84] Creating CNI manager for ""
	I1006 19:48:24.127028  178790 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:48:24.130157  178790 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1006 19:48:24.133003  178790 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 19:48:24.136923  178790 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1006 19:48:24.136934  178790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1006 19:48:24.151567  178790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 19:48:24.454417  178790 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 19:48:24.454523  178790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:48:24.454571  178790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-585086 minikube.k8s.io/updated_at=2025_10_06T19_48_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81 minikube.k8s.io/name=cert-expiration-585086 minikube.k8s.io/primary=true
	I1006 19:48:24.614750  178790 ops.go:34] apiserver oom_adj: -16
	I1006 19:48:24.614785  178790 kubeadm.go:1113] duration metric: took 160.319122ms to wait for elevateKubeSystemPrivileges
	I1006 19:48:24.614801  178790 kubeadm.go:402] duration metric: took 17.906669025s to StartCluster
	I1006 19:48:24.614829  178790 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:48:24.614886  178790 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:48:24.615534  178790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:48:24.615761  178790 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:48:24.615835  178790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 19:48:24.616039  178790 config.go:182] Loaded profile config "cert-expiration-585086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:48:24.616068  178790 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:48:24.616128  178790 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-585086"
	I1006 19:48:24.616142  178790 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-585086"
	I1006 19:48:24.616161  178790 host.go:66] Checking if "cert-expiration-585086" exists ...
	I1006 19:48:24.616629  178790 cli_runner.go:164] Run: docker container inspect cert-expiration-585086 --format={{.State.Status}}
	I1006 19:48:24.617193  178790 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-585086"
	I1006 19:48:24.617210  178790 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-585086"
	I1006 19:48:24.617531  178790 cli_runner.go:164] Run: docker container inspect cert-expiration-585086 --format={{.State.Status}}
	I1006 19:48:24.620458  178790 out.go:179] * Verifying Kubernetes components...
	I1006 19:48:24.629779  178790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:48:24.656080  178790 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 19:48:24.659147  178790 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:48:24.659159  178790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 19:48:24.659230  178790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:48:24.667765  178790 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-585086"
	I1006 19:48:24.667794  178790 host.go:66] Checking if "cert-expiration-585086" exists ...
	I1006 19:48:24.668237  178790 cli_runner.go:164] Run: docker container inspect cert-expiration-585086 --format={{.State.Status}}
	I1006 19:48:24.697959  178790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:48:24.710007  178790 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 19:48:24.710019  178790 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 19:48:24.710087  178790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:48:24.746160  178790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:48:24.895650  178790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
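The sed pipeline above edits the CoreDNS Corefile stored in the coredns ConfigMap: it adds a log directive and injects a hosts block for host.minikube.internal ahead of the forward plugin. After the replace, the relevant portion of the Corefile would read roughly as below (the surrounding stock plugins are assumed and abbreviated, not taken from this log):

    .:53 {
        log
        errors
        # ... other stock CoreDNS plugins omitted ...
        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
    }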
	I1006 19:48:24.934378  178790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:48:24.940211  178790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:48:24.993132  178790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 19:48:25.316894  178790 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1006 19:48:25.318798  178790 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:48:25.318849  178790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:48:25.543159  178790 api_server.go:72] duration metric: took 927.374091ms to wait for apiserver process to appear ...
	I1006 19:48:25.543169  178790 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:48:25.543184  178790 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:48:25.556397  178790 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
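The healthz probe minikube performs here can be reproduced by hand against the same endpoint; a sketch (-k skips TLS verification, since the cluster CA is not in the host trust store):

    # expect HTTP 200 with the plain-text body "ok"
    curl -k https://192.168.85.2:8443/healthz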
	I1006 19:48:25.558480  178790 api_server.go:141] control plane version: v1.34.1
	I1006 19:48:25.558503  178790 api_server.go:131] duration metric: took 15.323238ms to wait for apiserver health ...
	I1006 19:48:25.558512  178790 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:48:25.562965  178790 system_pods.go:59] 5 kube-system pods found
	I1006 19:48:25.562990  178790 system_pods.go:61] "etcd-cert-expiration-585086" [b790c765-8883-48b2-a3f5-e6f53a439757] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:48:25.562999  178790 system_pods.go:61] "kube-apiserver-cert-expiration-585086" [18d6bd33-b6a9-464b-a76c-f6ab40274a70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:48:25.563011  178790 system_pods.go:61] "kube-controller-manager-cert-expiration-585086" [caa02d40-6033-41bc-9c30-e63e576affe3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:48:25.563019  178790 system_pods.go:61] "kube-scheduler-cert-expiration-585086" [2282f7b9-c57d-4e62-b85a-2429bea2443f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:48:25.563024  178790 system_pods.go:61] "storage-provisioner" [5f490a8f-bf12-4ea5-8b20-ba1ef50d3c6e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1006 19:48:25.563029  178790 system_pods.go:74] duration metric: took 4.512256ms to wait for pod list to return data ...
	I1006 19:48:25.563040  178790 kubeadm.go:586] duration metric: took 947.259218ms to wait for: map[apiserver:true system_pods:true]
	I1006 19:48:25.563051  178790 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:48:25.566235  178790 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:48:25.566252  178790 node_conditions.go:123] node cpu capacity is 2
	I1006 19:48:25.566263  178790 node_conditions.go:105] duration metric: took 3.208647ms to run NodePressure ...
	I1006 19:48:25.566274  178790 start.go:241] waiting for startup goroutines ...
	I1006 19:48:25.566609  178790 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1006 19:48:25.569430  178790 addons.go:514] duration metric: took 953.341656ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1006 19:48:25.821527  178790 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-585086" context rescaled to 1 replicas
	I1006 19:48:25.821567  178790 start.go:246] waiting for cluster config update ...
	I1006 19:48:25.821578  178790 start.go:255] writing updated cluster config ...
	I1006 19:48:25.821905  178790 ssh_runner.go:195] Run: rm -f paused
	I1006 19:48:25.880960  178790 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 19:48:25.884163  178790 out.go:179] * Done! kubectl is now configured to use "cert-expiration-585086" cluster and "default" namespace by default
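With the kubeconfig written, the new profile can be exercised directly with kubectl; a minimal usage sketch using the context name from this run:

    kubectl config use-context cert-expiration-585086
    kubectl get nodes -o wide
    kubectl -n kube-system get pods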
	I1006 19:49:20.637518  174295 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000395394s
	I1006 19:49:20.637975  174295 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000478989s
	I1006 19:49:20.639951  174295 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001315194s
	I1006 19:49:20.639974  174295 kubeadm.go:318] 
	I1006 19:49:20.640345  174295 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1006 19:49:20.640497  174295 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1006 19:49:20.640658  174295 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1006 19:49:20.640831  174295 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1006 19:49:20.641258  174295 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1006 19:49:20.641412  174295 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1006 19:49:20.641418  174295 kubeadm.go:318] 
	I1006 19:49:20.646551  174295 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 19:49:20.646889  174295 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:49:20.647051  174295 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 19:49:20.647734  174295 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1006 19:49:20.647868  174295 kubeadm.go:402] duration metric: took 8m13.988260704s to StartCluster
	I1006 19:49:20.647918  174295 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1006 19:49:20.647920  174295 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:49:20.648072  174295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:49:20.674405  174295 cri.go:89] found id: ""
	I1006 19:49:20.674448  174295 logs.go:282] 0 containers: []
	W1006 19:49:20.674457  174295 logs.go:284] No container was found matching "kube-apiserver"
	I1006 19:49:20.674464  174295 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:49:20.674544  174295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:49:20.701758  174295 cri.go:89] found id: ""
	I1006 19:49:20.701781  174295 logs.go:282] 0 containers: []
	W1006 19:49:20.701790  174295 logs.go:284] No container was found matching "etcd"
	I1006 19:49:20.701796  174295 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:49:20.701873  174295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:49:20.729034  174295 cri.go:89] found id: ""
	I1006 19:49:20.729107  174295 logs.go:282] 0 containers: []
	W1006 19:49:20.729122  174295 logs.go:284] No container was found matching "coredns"
	I1006 19:49:20.729129  174295 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:49:20.729187  174295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:49:20.759630  174295 cri.go:89] found id: ""
	I1006 19:49:20.759656  174295 logs.go:282] 0 containers: []
	W1006 19:49:20.759664  174295 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:49:20.759671  174295 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:49:20.759754  174295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:49:20.785020  174295 cri.go:89] found id: ""
	I1006 19:49:20.785043  174295 logs.go:282] 0 containers: []
	W1006 19:49:20.785052  174295 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:49:20.785058  174295 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:49:20.785149  174295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:49:20.811442  174295 cri.go:89] found id: ""
	I1006 19:49:20.811467  174295 logs.go:282] 0 containers: []
	W1006 19:49:20.811475  174295 logs.go:284] No container was found matching "kube-controller-manager"
	I1006 19:49:20.811482  174295 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:49:20.811558  174295 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:49:20.842643  174295 cri.go:89] found id: ""
	I1006 19:49:20.842668  174295 logs.go:282] 0 containers: []
	W1006 19:49:20.842680  174295 logs.go:284] No container was found matching "kindnet"
	I1006 19:49:20.842711  174295 logs.go:123] Gathering logs for dmesg ...
	I1006 19:49:20.842728  174295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:49:20.857508  174295 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:49:20.857535  174295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:49:20.928425  174295 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 19:49:20.919545    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.920762    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.921563    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.922756    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.924346    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1006 19:49:20.919545    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.920762    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.921563    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.922756    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:20.924346    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:49:20.928448  174295 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:49:20.928460  174295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:49:21.006753  174295 logs.go:123] Gathering logs for container status ...
	I1006 19:49:21.006787  174295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:49:21.041402  174295 logs.go:123] Gathering logs for kubelet ...
	I1006 19:49:21.041429  174295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1006 19:49:21.131368  174295 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500707292s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000395394s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000478989s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001315194s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1006 19:49:21.131428  174295 out.go:285] * 
	W1006 19:49:21.131481  174295 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500707292s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000395394s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000478989s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001315194s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 19:49:21.131497  174295 out.go:285] * 
	W1006 19:49:21.133691  174295 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 19:49:21.139580  174295 out.go:203] 
	W1006 19:49:21.143341  174295 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500707292s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000395394s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000478989s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001315194s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1006 19:49:21.143384  174295 out.go:285] * 
	I1006 19:49:21.146946  174295 out.go:203] 
	
	
	==> CRI-O <==
	Oct 06 19:49:13 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:13.434481097Z" level=info msg="createCtr: removing container dcbf04c17beceb166c9be2169874123ba71a6e3de0f55a0f39a6d7f3a1695a60" id=ce46ba53-4ccf-4a7a-9b45-7bf960b53f6b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:49:13 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:13.434571937Z" level=info msg="createCtr: deleting container dcbf04c17beceb166c9be2169874123ba71a6e3de0f55a0f39a6d7f3a1695a60 from storage" id=ce46ba53-4ccf-4a7a-9b45-7bf960b53f6b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:49:13 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:13.43960934Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-env-760371_kube-system_d46bbcd43549acab7c27b02998ace70a_0" id=ce46ba53-4ccf-4a7a-9b45-7bf960b53f6b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:49:18 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:18.410869282Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=e4acf7b0-e979-42b1-bc0f-a92a8c3f61b9 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:49:18 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:18.411936129Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=9ab8a9b6-62b3-46d2-99ee-d6392725851a name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:49:18 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:18.412924501Z" level=info msg="Creating container: kube-system/etcd-force-systemd-env-760371/etcd" id=eaae7d1f-5bd5-4b22-833e-d5a6173fe1c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:49:18 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:18.413154388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:49:18 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:18.417723279Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:49:18 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:18.418228657Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:49:18 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:18.42852429Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=eaae7d1f-5bd5-4b22-833e-d5a6173fe1c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:49:18 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:18.429689789Z" level=info msg="createCtr: deleting container ID 267738f54c87b9b89b4a3539b5e89855ef2327f547140fad8ad6895ca4d4b9f3 from idIndex" id=eaae7d1f-5bd5-4b22-833e-d5a6173fe1c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:49:18 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:18.429735451Z" level=info msg="createCtr: removing container 267738f54c87b9b89b4a3539b5e89855ef2327f547140fad8ad6895ca4d4b9f3" id=eaae7d1f-5bd5-4b22-833e-d5a6173fe1c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:49:18 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:18.429773228Z" level=info msg="createCtr: deleting container 267738f54c87b9b89b4a3539b5e89855ef2327f547140fad8ad6895ca4d4b9f3 from storage" id=eaae7d1f-5bd5-4b22-833e-d5a6173fe1c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:49:18 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:18.432377182Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-env-760371_kube-system_0dbc0c1ac43eb3f35d4b865713c3040d_0" id=eaae7d1f-5bd5-4b22-833e-d5a6173fe1c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:49:19 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:19.411300671Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=59b26fe4-f644-4d36-a233-dc0a30b1913a name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:49:19 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:19.412299825Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=ede6e2a9-2533-4b78-bceb-8be23661c559 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:49:19 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:19.41331653Z" level=info msg="Creating container: kube-system/kube-apiserver-force-systemd-env-760371/kube-apiserver" id=f928ed8f-9ade-4fbd-9130-532e1c66a0fb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:49:19 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:19.413583463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:49:19 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:19.418143746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:49:19 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:19.418700358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:49:19 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:19.429500516Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f928ed8f-9ade-4fbd-9130-532e1c66a0fb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:49:19 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:19.430684189Z" level=info msg="createCtr: deleting container ID 9efc19ecd25595227a91f2e19af05c622640f9ef8f57ee28bfa4b434e19b402d from idIndex" id=f928ed8f-9ade-4fbd-9130-532e1c66a0fb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:49:19 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:19.430728883Z" level=info msg="createCtr: removing container 9efc19ecd25595227a91f2e19af05c622640f9ef8f57ee28bfa4b434e19b402d" id=f928ed8f-9ade-4fbd-9130-532e1c66a0fb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:49:19 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:19.430769122Z" level=info msg="createCtr: deleting container 9efc19ecd25595227a91f2e19af05c622640f9ef8f57ee28bfa4b434e19b402d from storage" id=f928ed8f-9ade-4fbd-9130-532e1c66a0fb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:49:19 force-systemd-env-760371 crio[838]: time="2025-10-06T19:49:19.433458402Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-env-760371_kube-system_682655a4019192b9f047beb24aa7a6f4_0" id=f928ed8f-9ade-4fbd-9130-532e1c66a0fb name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1006 19:49:22.213938    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:22.214581    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:22.216287    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:22.216731    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1006 19:49:22.218267    2488 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 6 19:13] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:14] overlayfs: idmapped layers are currently not supported
	[ +11.752506] hrtimer: interrupt took 8273017 ns
	[Oct 6 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:21] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:23] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:49:22 up  1:31,  0 user,  load average: 0.41, 0.75, 1.43
	Linux force-systemd-env-760371 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 06 19:49:13 force-systemd-env-760371 kubelet[1779]: E1006 19:49:13.440493    1779 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 19:49:13 force-systemd-env-760371 kubelet[1779]:         container kube-controller-manager start failed in pod kube-controller-manager-force-systemd-env-760371_kube-system(d46bbcd43549acab7c27b02998ace70a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 19:49:13 force-systemd-env-760371 kubelet[1779]:  > logger="UnhandledError"
	Oct 06 19:49:13 force-systemd-env-760371 kubelet[1779]: E1006 19:49:13.440524    1779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-force-systemd-env-760371" podUID="d46bbcd43549acab7c27b02998ace70a"
	Oct 06 19:49:16 force-systemd-env-760371 kubelet[1779]: E1006 19:49:16.148310    1779 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-env-760371.186bfe8390f9725a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-env-760371,UID:force-systemd-env-760371,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-env-760371 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-env-760371,},FirstTimestamp:2025-10-06 19:45:20.444748378 +0000 UTC m=+1.314125121,LastTimestamp:2025-10-06 19:45:20.444748378 +0000 UTC m=+1.314125121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:force-systemd-env-760371,}"
	Oct 06 19:49:17 force-systemd-env-760371 kubelet[1779]: E1006 19:49:17.053361    1779 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-env-760371?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 06 19:49:17 force-systemd-env-760371 kubelet[1779]: I1006 19:49:17.245142    1779 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-env-760371"
	Oct 06 19:49:17 force-systemd-env-760371 kubelet[1779]: E1006 19:49:17.245500    1779 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="force-systemd-env-760371"
	Oct 06 19:49:18 force-systemd-env-760371 kubelet[1779]: E1006 19:49:18.410395    1779 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-760371\" not found" node="force-systemd-env-760371"
	Oct 06 19:49:18 force-systemd-env-760371 kubelet[1779]: E1006 19:49:18.432672    1779 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 19:49:18 force-systemd-env-760371 kubelet[1779]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 19:49:18 force-systemd-env-760371 kubelet[1779]:  > podSandboxID="4a61d50936646b06a0575daaa15a10b27d9b52ea8d10029d5a80d9908141e5a9"
	Oct 06 19:49:18 force-systemd-env-760371 kubelet[1779]: E1006 19:49:18.432758    1779 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 19:49:18 force-systemd-env-760371 kubelet[1779]:         container etcd start failed in pod etcd-force-systemd-env-760371_kube-system(0dbc0c1ac43eb3f35d4b865713c3040d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 19:49:18 force-systemd-env-760371 kubelet[1779]:  > logger="UnhandledError"
	Oct 06 19:49:18 force-systemd-env-760371 kubelet[1779]: E1006 19:49:18.432788    1779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-env-760371" podUID="0dbc0c1ac43eb3f35d4b865713c3040d"
	Oct 06 19:49:19 force-systemd-env-760371 kubelet[1779]: E1006 19:49:19.410743    1779 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-760371\" not found" node="force-systemd-env-760371"
	Oct 06 19:49:19 force-systemd-env-760371 kubelet[1779]: E1006 19:49:19.433760    1779 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 06 19:49:19 force-systemd-env-760371 kubelet[1779]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 06 19:49:19 force-systemd-env-760371 kubelet[1779]:  > podSandboxID="891119d655b2eb3db0cdd1b9c3827885887dab08d802f5bce8e1f96ee4fa9856"
	Oct 06 19:49:19 force-systemd-env-760371 kubelet[1779]: E1006 19:49:19.433852    1779 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 06 19:49:19 force-systemd-env-760371 kubelet[1779]:         container kube-apiserver start failed in pod kube-apiserver-force-systemd-env-760371_kube-system(682655a4019192b9f047beb24aa7a6f4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 06 19:49:19 force-systemd-env-760371 kubelet[1779]:  > logger="UnhandledError"
	Oct 06 19:49:19 force-systemd-env-760371 kubelet[1779]: E1006 19:49:19.433885    1779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-force-systemd-env-760371" podUID="682655a4019192b9f047beb24aa7a6f4"
	Oct 06 19:49:20 force-systemd-env-760371 kubelet[1779]: E1006 19:49:20.473758    1779 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-env-760371\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-760371 -n force-systemd-env-760371
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-760371 -n force-systemd-env-760371: exit status 6 (345.742629ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 19:49:22.670237  181706 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-760371" does not appear in /home/jenkins/minikube-integration/21701-2540/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-env-760371" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-760371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-760371
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-760371: (1.99661868s)
--- FAIL: TestForceSystemdEnv (512.93s)
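
The repeated "cannot open sd-bus: No such file or directory" errors in the CRI-O and kubelet logs above point at CRI-O using the systemd cgroup manager while no systemd/D-Bus endpoint is reachable inside the node container, so every control-plane container fails at create time and kubeadm's control-plane checks never pass. A minimal triage sketch, to be run while the profile is still up (assumptions: the force-systemd-env-760371 container name from this run and the default CRI-O config paths):

	# is systemd actually running as PID 1 inside the kic node container?
	docker exec force-systemd-env-760371 ps -p 1 -o comm=
	docker exec force-systemd-env-760371 systemctl is-system-running

	# which cgroup manager is CRI-O configured to use on the node?
	docker exec force-systemd-env-760371 grep -Rn cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null

If PID 1 is not systemd (or systemctl cannot reach it), a cgroup_manager = "systemd" setting would produce exactly this sd-bus error; switching the profile to the cgroupfs manager or fixing systemd inside the node image are the usual follow-ups.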

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-184058 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-184058 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-4mdj6" [9f5a428c-21b9-4e37-9764-178633711f58] Pending
helpers_test.go:352: "hello-node-connect-7d85dfc575-4mdj6" [9f5a428c-21b9-4e37-9764-178633711f58] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-184058 -n functional-184058
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-06 19:03:06.937373384 +0000 UTC m=+1296.456440464
functional_test.go:1645: (dbg) Run:  kubectl --context functional-184058 describe po hello-node-connect-7d85dfc575-4mdj6 -n default
functional_test.go:1645: (dbg) kubectl --context functional-184058 describe po hello-node-connect-7d85dfc575-4mdj6 -n default:
Name:             hello-node-connect-7d85dfc575-4mdj6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-184058/192.168.49.2
Start Time:       Mon, 06 Oct 2025 18:53:06 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bpp7c (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-bpp7c:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-4mdj6 to functional-184058
Normal   Pulling    7m (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m55s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m55s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-184058 logs hello-node-connect-7d85dfc575-4mdj6 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-184058 logs hello-node-connect-7d85dfc575-4mdj6 -n default: exit status 1 (102.738824ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-4mdj6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-184058 logs hello-node-connect-7d85dfc575-4mdj6 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-184058 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-4mdj6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-184058/192.168.49.2
Start Time:       Mon, 06 Oct 2025 18:53:06 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bpp7c (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-bpp7c:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-4mdj6 to functional-184058
Normal   Pulling    7m (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m55s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m55s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-184058 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-184058 logs -l app=hello-node-connect: exit status 1 (87.45214ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-4mdj6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-184058 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-184058 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.98.108.7
IPs:                      10.98.108.7
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30233/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
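
Both hello-node-connect pod descriptions above fail the image pull with "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list": with CRI-O, unqualified image names are resolved through /etc/containers/registries.conf, and in enforcing short-name mode a name that could come from more than one unqualified-search registry is rejected rather than guessed. A minimal sketch of working around it, assuming the image is published on Docker Hub and reusing the functional-184058 context and node container from this run:

	# fully qualify the image so no short-name resolution is needed
	kubectl --context functional-184058 create deployment hello-node-connect --image=docker.io/kicbase/echo-server:latest

	# or inspect the node's short-name policy and the registries it would search
	docker exec functional-184058 grep -En 'short-name-mode|unqualified-search-registries' /etc/containers/registries.conf

Either fully qualifying the image in the test or trimming the unqualified-search-registries list so the short name resolves unambiguously would avoid the ErrImagePull/ImagePullBackOff loop recorded in the events.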
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-184058
helpers_test.go:243: (dbg) docker inspect functional-184058:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3be9348a91144888f68938f92b27d3fc533b4d1685da306a82f06be5fc9b6f80",
	        "Created": "2025-10-06T18:50:03.987052571Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 20087,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T18:50:04.021532581Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3be9348a91144888f68938f92b27d3fc533b4d1685da306a82f06be5fc9b6f80/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3be9348a91144888f68938f92b27d3fc533b4d1685da306a82f06be5fc9b6f80/hostname",
	        "HostsPath": "/var/lib/docker/containers/3be9348a91144888f68938f92b27d3fc533b4d1685da306a82f06be5fc9b6f80/hosts",
	        "LogPath": "/var/lib/docker/containers/3be9348a91144888f68938f92b27d3fc533b4d1685da306a82f06be5fc9b6f80/3be9348a91144888f68938f92b27d3fc533b4d1685da306a82f06be5fc9b6f80-json.log",
	        "Name": "/functional-184058",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-184058:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-184058",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3be9348a91144888f68938f92b27d3fc533b4d1685da306a82f06be5fc9b6f80",
	                "LowerDir": "/var/lib/docker/overlay2/41ff05a446ce0c66ace4d4830b32295cf0a222d2ee06b35dfbdd9288a1793273-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41ff05a446ce0c66ace4d4830b32295cf0a222d2ee06b35dfbdd9288a1793273/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41ff05a446ce0c66ace4d4830b32295cf0a222d2ee06b35dfbdd9288a1793273/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41ff05a446ce0c66ace4d4830b32295cf0a222d2ee06b35dfbdd9288a1793273/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-184058",
	                "Source": "/var/lib/docker/volumes/functional-184058/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-184058",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-184058",
	                "name.minikube.sigs.k8s.io": "functional-184058",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c72d1a2cce0cce8d8101e9b157d84fa3292f10ace0c20e295d8f4a961431e8ac",
	            "SandboxKey": "/var/run/docker/netns/c72d1a2cce0c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-184058": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:51:fa:22:4b:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "620224da0e6fb66a2a5122c0a73195fe1c879a334bcc570f1cd4ad048e69a81d",
	                    "EndpointID": "1eb849fd570a19b5a47bee7dce5f72a68e1092838577a5452841a35db6b13546",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-184058",
	                        "3be9348a9114"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
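The inspect output above records the container's published ports twice: HostConfig.PortBindings pins each binding to 127.0.0.1 with an empty HostPort (Docker assigns an ephemeral port), while NetworkSettings.Ports shows the ports actually allocated (22/tcp -> 32778, 8441/tcp -> 32781, and so on). The "Last Start" log further down reads exactly this mapping with a docker container inspect -f Go template via cli_runner; the sketch below wraps that same template in a small Go helper purely for illustration. The function name hostPortFor and the use of os/exec are assumptions for this sketch, not part of the test harness.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor (illustrative name) runs the same Go-template query that the
// minikube logs below issue through cli_runner, returning the host port that
// Docker published for the given container port, e.g. "22/tcp".
func hostPortFor(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Against the container inspected above this would print 32778,
	// assuming functional-184058 is still running on the same host.
	if port, err := hostPortFor("functional-184058", "22/tcp"); err == nil {
		fmt.Println(port)
	} else {
		fmt.Println("inspect failed:", err)
	}
}
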
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-184058 -n functional-184058
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-184058 logs -n 25: (1.44767753s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-184058 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:51 UTC │ 06 Oct 25 18:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 18:51 UTC │ 06 Oct 25 18:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 18:51 UTC │ 06 Oct 25 18:51 UTC │
	│ kubectl │ functional-184058 kubectl -- --context functional-184058 get pods                                                          │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:51 UTC │ 06 Oct 25 18:51 UTC │
	│ start   │ -p functional-184058 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:51 UTC │ 06 Oct 25 18:52 UTC │
	│ service │ invalid-svc -p functional-184058                                                                                           │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │                     │
	│ config  │ functional-184058 config unset cpus                                                                                        │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │ 06 Oct 25 18:52 UTC │
	│ cp      │ functional-184058 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │ 06 Oct 25 18:52 UTC │
	│ config  │ functional-184058 config get cpus                                                                                          │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │                     │
	│ config  │ functional-184058 config set cpus 2                                                                                        │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │ 06 Oct 25 18:52 UTC │
	│ config  │ functional-184058 config get cpus                                                                                          │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │ 06 Oct 25 18:52 UTC │
	│ config  │ functional-184058 config unset cpus                                                                                        │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │ 06 Oct 25 18:52 UTC │
	│ config  │ functional-184058 config get cpus                                                                                          │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │                     │
	│ ssh     │ functional-184058 ssh -n functional-184058 sudo cat /home/docker/cp-test.txt                                               │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │ 06 Oct 25 18:52 UTC │
	│ ssh     │ functional-184058 ssh echo hello                                                                                           │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │ 06 Oct 25 18:52 UTC │
	│ ssh     │ functional-184058 ssh cat /etc/hostname                                                                                    │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │ 06 Oct 25 18:52 UTC │
	│ cp      │ functional-184058 cp functional-184058:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2116690359/001/cp-test.txt │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │ 06 Oct 25 18:52 UTC │
	│ tunnel  │ functional-184058 tunnel --alsologtostderr                                                                                 │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │                     │
	│ tunnel  │ functional-184058 tunnel --alsologtostderr                                                                                 │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │                     │
	│ ssh     │ functional-184058 ssh -n functional-184058 sudo cat /home/docker/cp-test.txt                                               │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │ 06 Oct 25 18:52 UTC │
	│ cp      │ functional-184058 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │ 06 Oct 25 18:52 UTC │
	│ tunnel  │ functional-184058 tunnel --alsologtostderr                                                                                 │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │                     │
	│ ssh     │ functional-184058 ssh -n functional-184058 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:52 UTC │ 06 Oct 25 18:52 UTC │
	│ addons  │ functional-184058 addons list                                                                                              │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:53 UTC │ 06 Oct 25 18:53 UTC │
	│ addons  │ functional-184058 addons list -o json                                                                                      │ functional-184058 │ jenkins │ v1.37.0 │ 06 Oct 25 18:53 UTC │ 06 Oct 25 18:53 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 18:51:55
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 18:51:55.238524   24245 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:51:55.238939   24245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:51:55.238944   24245 out.go:374] Setting ErrFile to fd 2...
	I1006 18:51:55.238948   24245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:51:55.239623   24245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:51:55.240151   24245 out.go:368] Setting JSON to false
	I1006 18:51:55.241076   24245 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2051,"bootTime":1759774665,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 18:51:55.241222   24245 start.go:140] virtualization:  
	I1006 18:51:55.244694   24245 out.go:179] * [functional-184058] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 18:51:55.246966   24245 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 18:51:55.247076   24245 notify.go:220] Checking for updates...
	I1006 18:51:55.252795   24245 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 18:51:55.255890   24245 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 18:51:55.258676   24245 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 18:51:55.261503   24245 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 18:51:55.264381   24245 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 18:51:55.267834   24245 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:51:55.267929   24245 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 18:51:55.298685   24245 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 18:51:55.298796   24245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 18:51:55.359331   24245 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-06 18:51:55.350422426 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 18:51:55.359444   24245 docker.go:318] overlay module found
	I1006 18:51:55.362366   24245 out.go:179] * Using the docker driver based on existing profile
	I1006 18:51:55.365184   24245 start.go:304] selected driver: docker
	I1006 18:51:55.365192   24245 start.go:924] validating driver "docker" against &{Name:functional-184058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-184058 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 18:51:55.365273   24245 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 18:51:55.365376   24245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 18:51:55.430304   24245 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-06 18:51:55.421283613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 18:51:55.430707   24245 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 18:51:55.430730   24245 cni.go:84] Creating CNI manager for ""
	I1006 18:51:55.430786   24245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 18:51:55.430826   24245 start.go:348] cluster config:
	{Name:functional-184058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-184058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 18:51:55.434057   24245 out.go:179] * Starting "functional-184058" primary control-plane node in "functional-184058" cluster
	I1006 18:51:55.437034   24245 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 18:51:55.439995   24245 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 18:51:55.442776   24245 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 18:51:55.442823   24245 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 18:51:55.442830   24245 cache.go:58] Caching tarball of preloaded images
	I1006 18:51:55.442870   24245 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 18:51:55.442919   24245 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 18:51:55.442928   24245 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 18:51:55.443048   24245 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/config.json ...
	I1006 18:51:55.463094   24245 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 18:51:55.463105   24245 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 18:51:55.463117   24245 cache.go:232] Successfully downloaded all kic artifacts
	I1006 18:51:55.463137   24245 start.go:360] acquireMachinesLock for functional-184058: {Name:mke1368579a2be8708d1c809bee0343324190745 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 18:51:55.463191   24245 start.go:364] duration metric: took 38.491µs to acquireMachinesLock for "functional-184058"
	I1006 18:51:55.463208   24245 start.go:96] Skipping create...Using existing machine configuration
	I1006 18:51:55.463231   24245 fix.go:54] fixHost starting: 
	I1006 18:51:55.463576   24245 cli_runner.go:164] Run: docker container inspect functional-184058 --format={{.State.Status}}
	I1006 18:51:55.480803   24245 fix.go:112] recreateIfNeeded on functional-184058: state=Running err=<nil>
	W1006 18:51:55.480823   24245 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 18:51:55.484301   24245 out.go:252] * Updating the running docker "functional-184058" container ...
	I1006 18:51:55.484327   24245 machine.go:93] provisionDockerMachine start ...
	I1006 18:51:55.484423   24245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
	I1006 18:51:55.501369   24245 main.go:141] libmachine: Using SSH client type: native
	I1006 18:51:55.501697   24245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1006 18:51:55.501704   24245 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 18:51:55.635440   24245 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-184058
	
	I1006 18:51:55.635452   24245 ubuntu.go:182] provisioning hostname "functional-184058"
	I1006 18:51:55.635527   24245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
	I1006 18:51:55.659414   24245 main.go:141] libmachine: Using SSH client type: native
	I1006 18:51:55.659739   24245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1006 18:51:55.659749   24245 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-184058 && echo "functional-184058" | sudo tee /etc/hostname
	I1006 18:51:55.804801   24245 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-184058
	
	I1006 18:51:55.804882   24245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
	I1006 18:51:55.822991   24245 main.go:141] libmachine: Using SSH client type: native
	I1006 18:51:55.823294   24245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1006 18:51:55.823311   24245 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-184058' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-184058/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-184058' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 18:51:55.956142   24245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 18:51:55.956156   24245 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 18:51:55.956190   24245 ubuntu.go:190] setting up certificates
	I1006 18:51:55.956199   24245 provision.go:84] configureAuth start
	I1006 18:51:55.956256   24245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-184058
	I1006 18:51:55.975599   24245 provision.go:143] copyHostCerts
	I1006 18:51:55.975668   24245 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 18:51:55.975684   24245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 18:51:55.975908   24245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 18:51:55.976027   24245 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 18:51:55.976037   24245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 18:51:55.976069   24245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 18:51:55.976133   24245 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 18:51:55.976136   24245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 18:51:55.976159   24245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 18:51:55.976216   24245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.functional-184058 san=[127.0.0.1 192.168.49.2 functional-184058 localhost minikube]
	I1006 18:51:56.683694   24245 provision.go:177] copyRemoteCerts
	I1006 18:51:56.683766   24245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 18:51:56.683804   24245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
	I1006 18:51:56.704536   24245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/functional-184058/id_rsa Username:docker}
	I1006 18:51:56.799553   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 18:51:56.817728   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 18:51:56.836026   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 18:51:56.854096   24245 provision.go:87] duration metric: took 897.872926ms to configureAuth
	I1006 18:51:56.854125   24245 ubuntu.go:206] setting minikube options for container-runtime
	I1006 18:51:56.854363   24245 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:51:56.854475   24245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
	I1006 18:51:56.871896   24245 main.go:141] libmachine: Using SSH client type: native
	I1006 18:51:56.872203   24245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1006 18:51:56.872214   24245 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 18:52:02.231888   24245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 18:52:02.231900   24245 machine.go:96] duration metric: took 6.747567129s to provisionDockerMachine
	I1006 18:52:02.231909   24245 start.go:293] postStartSetup for "functional-184058" (driver="docker")
	I1006 18:52:02.231919   24245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 18:52:02.231992   24245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 18:52:02.232031   24245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
	I1006 18:52:02.249302   24245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/functional-184058/id_rsa Username:docker}
	I1006 18:52:02.343836   24245 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 18:52:02.347463   24245 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 18:52:02.347481   24245 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 18:52:02.347490   24245 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 18:52:02.347544   24245 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 18:52:02.347623   24245 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 18:52:02.347717   24245 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/test/nested/copy/4350/hosts -> hosts in /etc/test/nested/copy/4350
	I1006 18:52:02.347772   24245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4350
	I1006 18:52:02.355610   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 18:52:02.374762   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/test/nested/copy/4350/hosts --> /etc/test/nested/copy/4350/hosts (40 bytes)
	I1006 18:52:02.394251   24245 start.go:296] duration metric: took 162.327846ms for postStartSetup
	I1006 18:52:02.394320   24245 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 18:52:02.394374   24245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
	I1006 18:52:02.411790   24245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/functional-184058/id_rsa Username:docker}
	I1006 18:52:02.509127   24245 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 18:52:02.515528   24245 fix.go:56] duration metric: took 7.052307906s for fixHost
	I1006 18:52:02.515543   24245 start.go:83] releasing machines lock for "functional-184058", held for 7.052344944s
	I1006 18:52:02.515628   24245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-184058
	I1006 18:52:02.533682   24245 ssh_runner.go:195] Run: cat /version.json
	I1006 18:52:02.533725   24245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
	I1006 18:52:02.533975   24245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 18:52:02.534024   24245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
	I1006 18:52:02.555529   24245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/functional-184058/id_rsa Username:docker}
	I1006 18:52:02.561325   24245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/functional-184058/id_rsa Username:docker}
	I1006 18:52:02.744636   24245 ssh_runner.go:195] Run: systemctl --version
	I1006 18:52:02.751251   24245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 18:52:02.787269   24245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 18:52:02.792114   24245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 18:52:02.792175   24245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 18:52:02.800093   24245 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 18:52:02.800122   24245 start.go:495] detecting cgroup driver to use...
	I1006 18:52:02.800163   24245 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 18:52:02.800211   24245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 18:52:02.816650   24245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 18:52:02.829956   24245 docker.go:218] disabling cri-docker service (if available) ...
	I1006 18:52:02.830008   24245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 18:52:02.845248   24245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 18:52:02.858607   24245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 18:52:03.002446   24245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 18:52:03.143495   24245 docker.go:234] disabling docker service ...
	I1006 18:52:03.143562   24245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 18:52:03.160071   24245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 18:52:03.173764   24245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 18:52:03.301326   24245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 18:52:03.437984   24245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 18:52:03.450778   24245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 18:52:03.466095   24245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 18:52:03.466149   24245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:52:03.475925   24245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 18:52:03.475979   24245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:52:03.485590   24245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:52:03.494464   24245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:52:03.503863   24245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 18:52:03.512032   24245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:52:03.521421   24245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:52:03.529947   24245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 18:52:03.539104   24245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 18:52:03.546810   24245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 18:52:03.554459   24245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 18:52:03.681078   24245 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 18:52:10.194318   24245 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.513218257s)
	I1006 18:52:10.194334   24245 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 18:52:10.194382   24245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 18:52:10.198654   24245 start.go:563] Will wait 60s for crictl version
	I1006 18:52:10.198708   24245 ssh_runner.go:195] Run: which crictl
	I1006 18:52:10.202354   24245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 18:52:10.226462   24245 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 18:52:10.226554   24245 ssh_runner.go:195] Run: crio --version
	I1006 18:52:10.255118   24245 ssh_runner.go:195] Run: crio --version
	I1006 18:52:10.287106   24245 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 18:52:10.290043   24245 cli_runner.go:164] Run: docker network inspect functional-184058 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 18:52:10.306502   24245 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 18:52:10.313641   24245 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1006 18:52:10.316439   24245 kubeadm.go:883] updating cluster {Name:functional-184058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-184058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 18:52:10.316566   24245 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 18:52:10.316642   24245 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 18:52:10.349360   24245 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 18:52:10.349371   24245 crio.go:433] Images already preloaded, skipping extraction
	I1006 18:52:10.349423   24245 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 18:52:10.374792   24245 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 18:52:10.374803   24245 cache_images.go:85] Images are preloaded, skipping loading
	I1006 18:52:10.374809   24245 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1006 18:52:10.374910   24245 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-184058 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-184058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 18:52:10.374990   24245 ssh_runner.go:195] Run: crio config
	I1006 18:52:10.425530   24245 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1006 18:52:10.425550   24245 cni.go:84] Creating CNI manager for ""
	I1006 18:52:10.425558   24245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 18:52:10.425566   24245 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 18:52:10.425587   24245 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-184058 NodeName:functional-184058 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 18:52:10.425704   24245 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-184058"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 18:52:10.425774   24245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 18:52:10.433649   24245 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 18:52:10.433721   24245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 18:52:10.441434   24245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 18:52:10.454423   24245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 18:52:10.467508   24245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1006 18:52:10.480831   24245 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 18:52:10.484850   24245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 18:52:10.613218   24245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 18:52:10.626994   24245 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058 for IP: 192.168.49.2
	I1006 18:52:10.627005   24245 certs.go:195] generating shared ca certs ...
	I1006 18:52:10.627019   24245 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:52:10.627147   24245 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 18:52:10.627187   24245 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 18:52:10.627192   24245 certs.go:257] generating profile certs ...
	I1006 18:52:10.627269   24245 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.key
	I1006 18:52:10.627317   24245 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/apiserver.key.7a1c3fba
	I1006 18:52:10.627351   24245 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/proxy-client.key
	I1006 18:52:10.627453   24245 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 18:52:10.627478   24245 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 18:52:10.627484   24245 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 18:52:10.627505   24245 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 18:52:10.627524   24245 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 18:52:10.627549   24245 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 18:52:10.627589   24245 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 18:52:10.628223   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 18:52:10.646358   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 18:52:10.664626   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 18:52:10.682929   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 18:52:10.700737   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 18:52:10.717545   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 18:52:10.734941   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 18:52:10.751911   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 18:52:10.769452   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 18:52:10.786748   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 18:52:10.804210   24245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 18:52:10.822040   24245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 18:52:10.834282   24245 ssh_runner.go:195] Run: openssl version
	I1006 18:52:10.840545   24245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 18:52:10.849066   24245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 18:52:10.852880   24245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 18:52:10.852933   24245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 18:52:10.894332   24245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 18:52:10.902348   24245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 18:52:10.910445   24245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 18:52:10.914103   24245 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 18:52:10.914156   24245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 18:52:10.955018   24245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 18:52:10.962936   24245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 18:52:10.971150   24245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 18:52:10.975620   24245 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 18:52:10.975672   24245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 18:52:11.016902   24245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
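	Note: the symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention, which is why each "ln -fs" is preceded by an "openssl x509 -hash" run. A minimal sketch of that step for a single CA, with the path taken from the log:
	    # Compute the subject hash and create the <hash>.0 symlink OpenSSL resolves at verify time
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"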
	I1006 18:52:11.024938   24245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 18:52:11.032266   24245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 18:52:11.074556   24245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 18:52:11.116508   24245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 18:52:11.161061   24245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 18:52:11.242198   24245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 18:52:11.336361   24245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
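	Note: the "-checkend 86400" runs above ask OpenSSL whether each control-plane certificate remains valid for at least another 24 hours (86400 seconds); a non-zero exit would trigger regeneration. A standalone equivalent, with an illustrative certificate path:
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	        echo "certificate valid for at least another 24h"
	    else
	        echo "certificate missing or expiring within 24h"   # regeneration path
	    fi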
	I1006 18:52:11.426370   24245 kubeadm.go:400] StartCluster: {Name:functional-184058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-184058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 18:52:11.426465   24245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 18:52:11.426532   24245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:52:11.542098   24245 cri.go:89] found id: "e6895c573774e6e7180f0559e10166baee4d1c4d5bd86472d6a9602e02714f5e"
	I1006 18:52:11.542109   24245 cri.go:89] found id: "321abf76c5e51ee89f16ce91b6586ef4ed13171d1a0b3a44af3eec05cfde6e07"
	I1006 18:52:11.542125   24245 cri.go:89] found id: "3927de176f35eafa33a9375fc86411e70971d2402f573001b2ee67b0662f0b5f"
	I1006 18:52:11.542128   24245 cri.go:89] found id: "9422504570c94447b34c12d649d33cc5d0891b9e3883159aace433341cb1be22"
	I1006 18:52:11.542130   24245 cri.go:89] found id: "e70faaabdba784f6c99a0f7fc95654a04f3bc7a8cc53ceb69bd35b5742ed2de1"
	I1006 18:52:11.542133   24245 cri.go:89] found id: "add3a599da1c71285fc5d4b7843970bbcd62635d0a1611dbc1758c5047780184"
	I1006 18:52:11.542139   24245 cri.go:89] found id: "8b6fbf87813dc49c52927b29f30090b61c6ab0b9d9b8017ffa60e89189ac0a05"
	I1006 18:52:11.542142   24245 cri.go:89] found id: "855662367b2584f2b2f6f0a9996df014844358f3c20380a0a7bec2e6e6284d3c"
	I1006 18:52:11.542144   24245 cri.go:89] found id: "d279e26171cf47651b8f575497a0038bf49297bf86f25ae8eed84b25b61475d8"
	I1006 18:52:11.542149   24245 cri.go:89] found id: "708bbc7d30c0d4e68a799d8a4d0237d15505381f9146e1e717d7bfa7c1fb5070"
	I1006 18:52:11.542151   24245 cri.go:89] found id: "63c2e95720bba851bdb4b8a1291056277d27bf2b3bad8be9503de13e3e3c977d"
	I1006 18:52:11.542155   24245 cri.go:89] found id: "d5b1c9d62c926b0a22daa9a02315d40ffbbe0a17f63559c7ee365b7a121fbc24"
	I1006 18:52:11.542157   24245 cri.go:89] found id: "8a33406c6421af2f56cf71141954e27adf314b68e3c08b6b394117b9ffeb6057"
	I1006 18:52:11.542159   24245 cri.go:89] found id: "293f5b3e499843f712a6bc8a56b06e2718a5fe8d08e105e4602e7ec23a7de72d"
	I1006 18:52:11.542161   24245 cri.go:89] found id: "5e1a232ffd66a70447d3f37138d04a0efc773728fa3380e2c0c9500d09c8cf35"
	I1006 18:52:11.542170   24245 cri.go:89] found id: "6ae1fb69a9317afcb9d9e1685b15974d382503a0ca15a57f1d3c24ceb3005a98"
	I1006 18:52:11.542173   24245 cri.go:89] found id: "85ddd1f86045bdf433f3cbc59576d02d5b6fc60d577cfa1550e5297205efc108"
	I1006 18:52:11.542179   24245 cri.go:89] found id: "43c47efbebcf2230add1dc2d6f37ca73ccc5ad98fc4d6cb574978ec3b3ba710a"
	I1006 18:52:11.542181   24245 cri.go:89] found id: ""
	I1006 18:52:11.542232   24245 ssh_runner.go:195] Run: sudo runc list -f json
	W1006 18:52:11.572091   24245 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:52:11Z" level=error msg="open /run/runc: no such file or directory"
	I1006 18:52:11.572161   24245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 18:52:11.597120   24245 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 18:52:11.597149   24245 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 18:52:11.597195   24245 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 18:52:11.606607   24245 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 18:52:11.607202   24245 kubeconfig.go:125] found "functional-184058" server: "https://192.168.49.2:8441"
	I1006 18:52:11.608839   24245 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 18:52:11.631344   24245 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-06 18:50:15.141313710 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-06 18:52:10.472871183 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
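	Note: the drift check above is a plain unified diff of the freshly rendered kubeadm config against the copy already on the node; any difference (here the enable-admission-plugins value) triggers the control-plane restart that follows. A sketch of the same check:
	    # Non-zero diff exit status == config drift; minikube then reconfigures the cluster.
	    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	        echo "kubeadm config drift detected - restarting primary control plane"
	    fi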
	I1006 18:52:11.631363   24245 kubeadm.go:1160] stopping kube-system containers ...
	I1006 18:52:11.631374   24245 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1006 18:52:11.631434   24245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 18:52:11.739076   24245 cri.go:89] found id: "b4dce4fed2ae7530c46ef1d8aedaa3356d624c4df65a16b32d3b27fa33003ea6"
	I1006 18:52:11.739097   24245 cri.go:89] found id: "672d3554a562ecbb977ae6d14c2571cf8b4a2bc4c6adfddfa6477e70cb8ed908"
	I1006 18:52:11.739101   24245 cri.go:89] found id: "dfd3ea9d7c73990b01a6a03b947b558e6c80862be3dec9edb3a425476d4e0190"
	I1006 18:52:11.739104   24245 cri.go:89] found id: "e6895c573774e6e7180f0559e10166baee4d1c4d5bd86472d6a9602e02714f5e"
	I1006 18:52:11.739106   24245 cri.go:89] found id: "321abf76c5e51ee89f16ce91b6586ef4ed13171d1a0b3a44af3eec05cfde6e07"
	I1006 18:52:11.739110   24245 cri.go:89] found id: "3927de176f35eafa33a9375fc86411e70971d2402f573001b2ee67b0662f0b5f"
	I1006 18:52:11.739112   24245 cri.go:89] found id: "9422504570c94447b34c12d649d33cc5d0891b9e3883159aace433341cb1be22"
	I1006 18:52:11.739115   24245 cri.go:89] found id: "e70faaabdba784f6c99a0f7fc95654a04f3bc7a8cc53ceb69bd35b5742ed2de1"
	I1006 18:52:11.739117   24245 cri.go:89] found id: "add3a599da1c71285fc5d4b7843970bbcd62635d0a1611dbc1758c5047780184"
	I1006 18:52:11.739130   24245 cri.go:89] found id: "8b6fbf87813dc49c52927b29f30090b61c6ab0b9d9b8017ffa60e89189ac0a05"
	I1006 18:52:11.739132   24245 cri.go:89] found id: "855662367b2584f2b2f6f0a9996df014844358f3c20380a0a7bec2e6e6284d3c"
	I1006 18:52:11.739135   24245 cri.go:89] found id: "d279e26171cf47651b8f575497a0038bf49297bf86f25ae8eed84b25b61475d8"
	I1006 18:52:11.739137   24245 cri.go:89] found id: "708bbc7d30c0d4e68a799d8a4d0237d15505381f9146e1e717d7bfa7c1fb5070"
	I1006 18:52:11.739139   24245 cri.go:89] found id: "63c2e95720bba851bdb4b8a1291056277d27bf2b3bad8be9503de13e3e3c977d"
	I1006 18:52:11.739141   24245 cri.go:89] found id: "d5b1c9d62c926b0a22daa9a02315d40ffbbe0a17f63559c7ee365b7a121fbc24"
	I1006 18:52:11.739145   24245 cri.go:89] found id: "85ddd1f86045bdf433f3cbc59576d02d5b6fc60d577cfa1550e5297205efc108"
	I1006 18:52:11.739147   24245 cri.go:89] found id: ""
	I1006 18:52:11.739152   24245 cri.go:252] Stopping containers: [b4dce4fed2ae7530c46ef1d8aedaa3356d624c4df65a16b32d3b27fa33003ea6 672d3554a562ecbb977ae6d14c2571cf8b4a2bc4c6adfddfa6477e70cb8ed908 dfd3ea9d7c73990b01a6a03b947b558e6c80862be3dec9edb3a425476d4e0190 e6895c573774e6e7180f0559e10166baee4d1c4d5bd86472d6a9602e02714f5e 321abf76c5e51ee89f16ce91b6586ef4ed13171d1a0b3a44af3eec05cfde6e07 3927de176f35eafa33a9375fc86411e70971d2402f573001b2ee67b0662f0b5f 9422504570c94447b34c12d649d33cc5d0891b9e3883159aace433341cb1be22 e70faaabdba784f6c99a0f7fc95654a04f3bc7a8cc53ceb69bd35b5742ed2de1 add3a599da1c71285fc5d4b7843970bbcd62635d0a1611dbc1758c5047780184 8b6fbf87813dc49c52927b29f30090b61c6ab0b9d9b8017ffa60e89189ac0a05 855662367b2584f2b2f6f0a9996df014844358f3c20380a0a7bec2e6e6284d3c d279e26171cf47651b8f575497a0038bf49297bf86f25ae8eed84b25b61475d8 708bbc7d30c0d4e68a799d8a4d0237d15505381f9146e1e717d7bfa7c1fb5070 63c2e95720bba851bdb4b8a1291056277d27bf2b3bad8be9503de13e3e3c977d d5b1c9d62c926b0a22daa9a02315d40ffbbe0a17f63559c7ee365b7a121fbc24 85ddd1f86045bdf433f3cbc59576d02d5b6fc60d577cfa1550e5297205efc108]
	I1006 18:52:11.739211   24245 ssh_runner.go:195] Run: which crictl
	I1006 18:52:11.743670   24245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 b4dce4fed2ae7530c46ef1d8aedaa3356d624c4df65a16b32d3b27fa33003ea6 672d3554a562ecbb977ae6d14c2571cf8b4a2bc4c6adfddfa6477e70cb8ed908 dfd3ea9d7c73990b01a6a03b947b558e6c80862be3dec9edb3a425476d4e0190 e6895c573774e6e7180f0559e10166baee4d1c4d5bd86472d6a9602e02714f5e 321abf76c5e51ee89f16ce91b6586ef4ed13171d1a0b3a44af3eec05cfde6e07 3927de176f35eafa33a9375fc86411e70971d2402f573001b2ee67b0662f0b5f 9422504570c94447b34c12d649d33cc5d0891b9e3883159aace433341cb1be22 e70faaabdba784f6c99a0f7fc95654a04f3bc7a8cc53ceb69bd35b5742ed2de1 add3a599da1c71285fc5d4b7843970bbcd62635d0a1611dbc1758c5047780184 8b6fbf87813dc49c52927b29f30090b61c6ab0b9d9b8017ffa60e89189ac0a05 855662367b2584f2b2f6f0a9996df014844358f3c20380a0a7bec2e6e6284d3c d279e26171cf47651b8f575497a0038bf49297bf86f25ae8eed84b25b61475d8 708bbc7d30c0d4e68a799d8a4d0237d15505381f9146e1e717d7bfa7c1fb5070 63c2e95720bba851bdb4b8a1291056277d27bf2b3bad8be9503de13e3e3c977d d5b1c9d62c926b0a22daa9a02315d40ffbbe0a17f63559c7ee365b7a121fbc24 85ddd1f86045bdf433f3cbc59576d02d5b6fc60d577cfa1550e5297205efc108
	I1006 18:52:23.341004   24245 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 b4dce4fed2ae7530c46ef1d8aedaa3356d624c4df65a16b32d3b27fa33003ea6 672d3554a562ecbb977ae6d14c2571cf8b4a2bc4c6adfddfa6477e70cb8ed908 dfd3ea9d7c73990b01a6a03b947b558e6c80862be3dec9edb3a425476d4e0190 e6895c573774e6e7180f0559e10166baee4d1c4d5bd86472d6a9602e02714f5e 321abf76c5e51ee89f16ce91b6586ef4ed13171d1a0b3a44af3eec05cfde6e07 3927de176f35eafa33a9375fc86411e70971d2402f573001b2ee67b0662f0b5f 9422504570c94447b34c12d649d33cc5d0891b9e3883159aace433341cb1be22 e70faaabdba784f6c99a0f7fc95654a04f3bc7a8cc53ceb69bd35b5742ed2de1 add3a599da1c71285fc5d4b7843970bbcd62635d0a1611dbc1758c5047780184 8b6fbf87813dc49c52927b29f30090b61c6ab0b9d9b8017ffa60e89189ac0a05 855662367b2584f2b2f6f0a9996df014844358f3c20380a0a7bec2e6e6284d3c d279e26171cf47651b8f575497a0038bf49297bf86f25ae8eed84b25b61475d8 708bbc7d30c0d4e68a799d8a4d0237d15505381f9146e1e717d7bfa7c1fb5070 63c2e95720bba851bdb4b8a1291056277d27bf2b3bad8be9503de13e3e3c977d d5b1c9d62c926b0a22daa9a02315d40ffbbe0a17f63559c7ee365b7a121fbc24 85ddd1f86045bdf433f3cbc59576d02d5b6fc60d577cfa1550e5297205efc108: (11.597300369s)
	I1006 18:52:23.341067   24245 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1006 18:52:23.466593   24245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 18:52:23.474897   24245 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  6 18:50 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  6 18:50 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  6 18:50 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  6 18:50 /etc/kubernetes/scheduler.conf
	
	I1006 18:52:23.474964   24245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 18:52:23.482970   24245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 18:52:23.490783   24245 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 18:52:23.490835   24245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 18:52:23.498578   24245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 18:52:23.506618   24245 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 18:52:23.506678   24245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 18:52:23.514227   24245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 18:52:23.522045   24245 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 18:52:23.522099   24245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 18:52:23.529932   24245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 18:52:23.538001   24245 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 18:52:23.584863   24245 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 18:52:25.968649   24245 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.38376274s)
	I1006 18:52:25.968713   24245 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1006 18:52:26.194825   24245 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 18:52:26.257738   24245 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
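	Note: rather than running a full "kubeadm init", the restart path replays individual kubeadm phases against the existing cluster data, as the five Run lines above show. Condensed into plain commands (binary and config paths taken from the log):
	    KUBEADM=/var/lib/minikube/binaries/v1.34.1/kubeadm
	    CFG=/var/tmp/minikube/kubeadm.yaml
	    sudo "$KUBEADM" init phase certs all          --config "$CFG"
	    sudo "$KUBEADM" init phase kubeconfig all     --config "$CFG"
	    sudo "$KUBEADM" init phase kubelet-start      --config "$CFG"
	    sudo "$KUBEADM" init phase control-plane all  --config "$CFG"
	    sudo "$KUBEADM" init phase etcd local         --config "$CFG"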
	I1006 18:52:26.321468   24245 api_server.go:52] waiting for apiserver process to appear ...
	I1006 18:52:26.321528   24245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 18:52:26.822056   24245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 18:52:27.322417   24245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 18:52:27.336724   24245 api_server.go:72] duration metric: took 1.015263908s to wait for apiserver process to appear ...
	I1006 18:52:27.336737   24245 api_server.go:88] waiting for apiserver healthz status ...
	I1006 18:52:27.336754   24245 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 18:52:31.232098   24245 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1006 18:52:31.232115   24245 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1006 18:52:31.232127   24245 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 18:52:31.241132   24245 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1006 18:52:31.241147   24245 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1006 18:52:31.337376   24245 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 18:52:31.356424   24245 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 18:52:31.356460   24245 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 18:52:31.837175   24245 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 18:52:31.855566   24245 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 18:52:31.855592   24245 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 18:52:32.337729   24245 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 18:52:32.346451   24245 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1006 18:52:32.360941   24245 api_server.go:141] control plane version: v1.34.1
	I1006 18:52:32.360958   24245 api_server.go:131] duration metric: took 5.024215417s to wait for apiserver health ...
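	Note: the polling above can be reproduced by hand. The initial 403s are anonymous requests rejected before the RBAC bootstrap roles exist; the 500s list post-start hooks that have not yet finished. A manual probe against the endpoint from the log (-k because the host shell does not trust the cluster CA):
	    curl -k "https://192.168.49.2:8441/healthz?verbose"   # per-check breakdown, as in the 500 bodies above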
	I1006 18:52:32.360966   24245 cni.go:84] Creating CNI manager for ""
	I1006 18:52:32.360971   24245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 18:52:32.364232   24245 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1006 18:52:32.367376   24245 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 18:52:32.372664   24245 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1006 18:52:32.372675   24245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1006 18:52:32.398317   24245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 18:52:32.829430   24245 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 18:52:32.832878   24245 system_pods.go:59] 8 kube-system pods found
	I1006 18:52:32.832900   24245 system_pods.go:61] "coredns-66bc5c9577-gvvqf" [ddb2c8e8-2ddc-4c9d-9fee-65fb361e4731] Running
	I1006 18:52:32.832909   24245 system_pods.go:61] "etcd-functional-184058" [c43eb306-0fdb-49ff-8191-af48f3a679f9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 18:52:32.832915   24245 system_pods.go:61] "kindnet-ms7pw" [f4b6c6f9-fe5a-4f72-8cf6-cff3c660d74e] Running
	I1006 18:52:32.832922   24245 system_pods.go:61] "kube-apiserver-functional-184058" [641afdbe-0199-4adb-928a-46d0416c219d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 18:52:32.832927   24245 system_pods.go:61] "kube-controller-manager-functional-184058" [54ecc3f0-b7b8-42ca-b84f-c371e323b4e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 18:52:32.832932   24245 system_pods.go:61] "kube-proxy-7skbp" [2cbba388-8a0f-477e-98f0-33c185859aa2] Running
	I1006 18:52:32.832937   24245 system_pods.go:61] "kube-scheduler-functional-184058" [9e2596ee-c128-49b7-9464-21e295970cac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 18:52:32.832940   24245 system_pods.go:61] "storage-provisioner" [ecf3f65b-ecc0-41fe-911f-ecc7eba033b7] Running
	I1006 18:52:32.832946   24245 system_pods.go:74] duration metric: took 3.505242ms to wait for pod list to return data ...
	I1006 18:52:32.832952   24245 node_conditions.go:102] verifying NodePressure condition ...
	I1006 18:52:32.835735   24245 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 18:52:32.835757   24245 node_conditions.go:123] node cpu capacity is 2
	I1006 18:52:32.835768   24245 node_conditions.go:105] duration metric: took 2.811755ms to run NodePressure ...
	I1006 18:52:32.835825   24245 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 18:52:33.085820   24245 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1006 18:52:33.089384   24245 kubeadm.go:743] kubelet initialised
	I1006 18:52:33.089395   24245 kubeadm.go:744] duration metric: took 3.562193ms waiting for restarted kubelet to initialise ...
	I1006 18:52:33.089409   24245 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 18:52:33.098687   24245 ops.go:34] apiserver oom_adj: -16
	I1006 18:52:33.098700   24245 kubeadm.go:601] duration metric: took 21.501545512s to restartPrimaryControlPlane
	I1006 18:52:33.098708   24245 kubeadm.go:402] duration metric: took 21.672349606s to StartCluster
	I1006 18:52:33.098722   24245 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:52:33.098779   24245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 18:52:33.099414   24245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 18:52:33.099626   24245 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 18:52:33.099905   24245 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 18:52:33.099937   24245 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 18:52:33.099987   24245 addons.go:69] Setting storage-provisioner=true in profile "functional-184058"
	I1006 18:52:33.100000   24245 addons.go:238] Setting addon storage-provisioner=true in "functional-184058"
	W1006 18:52:33.100005   24245 addons.go:247] addon storage-provisioner should already be in state true
	I1006 18:52:33.100022   24245 host.go:66] Checking if "functional-184058" exists ...
	I1006 18:52:33.100041   24245 addons.go:69] Setting default-storageclass=true in profile "functional-184058"
	I1006 18:52:33.100057   24245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-184058"
	I1006 18:52:33.100355   24245 cli_runner.go:164] Run: docker container inspect functional-184058 --format={{.State.Status}}
	I1006 18:52:33.100425   24245 cli_runner.go:164] Run: docker container inspect functional-184058 --format={{.State.Status}}
	I1006 18:52:33.103860   24245 out.go:179] * Verifying Kubernetes components...
	I1006 18:52:33.106915   24245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 18:52:33.135790   24245 addons.go:238] Setting addon default-storageclass=true in "functional-184058"
	W1006 18:52:33.135802   24245 addons.go:247] addon default-storageclass should already be in state true
	I1006 18:52:33.135834   24245 host.go:66] Checking if "functional-184058" exists ...
	I1006 18:52:33.136281   24245 cli_runner.go:164] Run: docker container inspect functional-184058 --format={{.State.Status}}
	I1006 18:52:33.137746   24245 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 18:52:33.140552   24245 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 18:52:33.140562   24245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 18:52:33.140628   24245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
	I1006 18:52:33.169023   24245 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 18:52:33.169036   24245 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 18:52:33.169101   24245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
	I1006 18:52:33.181306   24245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/functional-184058/id_rsa Username:docker}
	I1006 18:52:33.217730   24245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/functional-184058/id_rsa Username:docker}
	I1006 18:52:33.321845   24245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 18:52:33.332671   24245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 18:52:33.372610   24245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 18:52:34.144619   24245 node_ready.go:35] waiting up to 6m0s for node "functional-184058" to be "Ready" ...
	I1006 18:52:34.147480   24245 node_ready.go:49] node "functional-184058" is "Ready"
	I1006 18:52:34.147497   24245 node_ready.go:38] duration metric: took 2.850302ms for node "functional-184058" to be "Ready" ...
	I1006 18:52:34.147509   24245 api_server.go:52] waiting for apiserver process to appear ...
	I1006 18:52:34.147567   24245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 18:52:34.156156   24245 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1006 18:52:34.159083   24245 addons.go:514] duration metric: took 1.059134006s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1006 18:52:34.162318   24245 api_server.go:72] duration metric: took 1.062666841s to wait for apiserver process to appear ...
	I1006 18:52:34.162330   24245 api_server.go:88] waiting for apiserver healthz status ...
	I1006 18:52:34.162346   24245 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 18:52:34.172055   24245 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1006 18:52:34.172933   24245 api_server.go:141] control plane version: v1.34.1
	I1006 18:52:34.172946   24245 api_server.go:131] duration metric: took 10.611775ms to wait for apiserver health ...
	I1006 18:52:34.172954   24245 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 18:52:34.175690   24245 system_pods.go:59] 8 kube-system pods found
	I1006 18:52:34.175734   24245 system_pods.go:61] "coredns-66bc5c9577-gvvqf" [ddb2c8e8-2ddc-4c9d-9fee-65fb361e4731] Running
	I1006 18:52:34.175743   24245 system_pods.go:61] "etcd-functional-184058" [c43eb306-0fdb-49ff-8191-af48f3a679f9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 18:52:34.175747   24245 system_pods.go:61] "kindnet-ms7pw" [f4b6c6f9-fe5a-4f72-8cf6-cff3c660d74e] Running
	I1006 18:52:34.175754   24245 system_pods.go:61] "kube-apiserver-functional-184058" [641afdbe-0199-4adb-928a-46d0416c219d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 18:52:34.175763   24245 system_pods.go:61] "kube-controller-manager-functional-184058" [54ecc3f0-b7b8-42ca-b84f-c371e323b4e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 18:52:34.175767   24245 system_pods.go:61] "kube-proxy-7skbp" [2cbba388-8a0f-477e-98f0-33c185859aa2] Running
	I1006 18:52:34.175774   24245 system_pods.go:61] "kube-scheduler-functional-184058" [9e2596ee-c128-49b7-9464-21e295970cac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 18:52:34.175777   24245 system_pods.go:61] "storage-provisioner" [ecf3f65b-ecc0-41fe-911f-ecc7eba033b7] Running
	I1006 18:52:34.175782   24245 system_pods.go:74] duration metric: took 2.822931ms to wait for pod list to return data ...
	I1006 18:52:34.175788   24245 default_sa.go:34] waiting for default service account to be created ...
	I1006 18:52:34.177922   24245 default_sa.go:45] found service account: "default"
	I1006 18:52:34.177932   24245 default_sa.go:55] duration metric: took 2.140677ms for default service account to be created ...
	I1006 18:52:34.177939   24245 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 18:52:34.180883   24245 system_pods.go:86] 8 kube-system pods found
	I1006 18:52:34.180897   24245 system_pods.go:89] "coredns-66bc5c9577-gvvqf" [ddb2c8e8-2ddc-4c9d-9fee-65fb361e4731] Running
	I1006 18:52:34.180905   24245 system_pods.go:89] "etcd-functional-184058" [c43eb306-0fdb-49ff-8191-af48f3a679f9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 18:52:34.180909   24245 system_pods.go:89] "kindnet-ms7pw" [f4b6c6f9-fe5a-4f72-8cf6-cff3c660d74e] Running
	I1006 18:52:34.180915   24245 system_pods.go:89] "kube-apiserver-functional-184058" [641afdbe-0199-4adb-928a-46d0416c219d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 18:52:34.180921   24245 system_pods.go:89] "kube-controller-manager-functional-184058" [54ecc3f0-b7b8-42ca-b84f-c371e323b4e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 18:52:34.180924   24245 system_pods.go:89] "kube-proxy-7skbp" [2cbba388-8a0f-477e-98f0-33c185859aa2] Running
	I1006 18:52:34.180928   24245 system_pods.go:89] "kube-scheduler-functional-184058" [9e2596ee-c128-49b7-9464-21e295970cac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 18:52:34.180931   24245 system_pods.go:89] "storage-provisioner" [ecf3f65b-ecc0-41fe-911f-ecc7eba033b7] Running
	I1006 18:52:34.180938   24245 system_pods.go:126] duration metric: took 2.995113ms to wait for k8s-apps to be running ...
	I1006 18:52:34.180945   24245 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 18:52:34.181001   24245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 18:52:34.196898   24245 system_svc.go:56] duration metric: took 15.943131ms WaitForService to wait for kubelet
	I1006 18:52:34.196916   24245 kubeadm.go:586] duration metric: took 1.097270017s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 18:52:34.196932   24245 node_conditions.go:102] verifying NodePressure condition ...
	I1006 18:52:34.201179   24245 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 18:52:34.201195   24245 node_conditions.go:123] node cpu capacity is 2
	I1006 18:52:34.201204   24245 node_conditions.go:105] duration metric: took 4.263016ms to run NodePressure ...
	I1006 18:52:34.201215   24245 start.go:241] waiting for startup goroutines ...
	I1006 18:52:34.201222   24245 start.go:246] waiting for cluster config update ...
	I1006 18:52:34.201232   24245 start.go:255] writing updated cluster config ...
	I1006 18:52:34.201515   24245 ssh_runner.go:195] Run: rm -f paused
	I1006 18:52:34.205277   24245 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 18:52:34.208610   24245 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gvvqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:52:34.214412   24245 pod_ready.go:94] pod "coredns-66bc5c9577-gvvqf" is "Ready"
	I1006 18:52:34.214426   24245 pod_ready.go:86] duration metric: took 5.804111ms for pod "coredns-66bc5c9577-gvvqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:52:34.216751   24245 pod_ready.go:83] waiting for pod "etcd-functional-184058" in "kube-system" namespace to be "Ready" or be gone ...
	W1006 18:52:36.222893   24245 pod_ready.go:104] pod "etcd-functional-184058" is not "Ready", error: <nil>
	W1006 18:52:38.722898   24245 pod_ready.go:104] pod "etcd-functional-184058" is not "Ready", error: <nil>
	W1006 18:52:41.221736   24245 pod_ready.go:104] pod "etcd-functional-184058" is not "Ready", error: <nil>
	W1006 18:52:43.221904   24245 pod_ready.go:104] pod "etcd-functional-184058" is not "Ready", error: <nil>
	W1006 18:52:45.224766   24245 pod_ready.go:104] pod "etcd-functional-184058" is not "Ready", error: <nil>
	I1006 18:52:46.222854   24245 pod_ready.go:94] pod "etcd-functional-184058" is "Ready"
	I1006 18:52:46.222876   24245 pod_ready.go:86] duration metric: took 12.006106216s for pod "etcd-functional-184058" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:52:46.225420   24245 pod_ready.go:83] waiting for pod "kube-apiserver-functional-184058" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:52:46.230224   24245 pod_ready.go:94] pod "kube-apiserver-functional-184058" is "Ready"
	I1006 18:52:46.230238   24245 pod_ready.go:86] duration metric: took 4.804936ms for pod "kube-apiserver-functional-184058" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:52:46.232543   24245 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-184058" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:52:46.237210   24245 pod_ready.go:94] pod "kube-controller-manager-functional-184058" is "Ready"
	I1006 18:52:46.237224   24245 pod_ready.go:86] duration metric: took 4.66916ms for pod "kube-controller-manager-functional-184058" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:52:46.239627   24245 pod_ready.go:83] waiting for pod "kube-proxy-7skbp" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:52:46.420980   24245 pod_ready.go:94] pod "kube-proxy-7skbp" is "Ready"
	I1006 18:52:46.420994   24245 pod_ready.go:86] duration metric: took 181.354585ms for pod "kube-proxy-7skbp" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:52:46.621139   24245 pod_ready.go:83] waiting for pod "kube-scheduler-functional-184058" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:52:47.021055   24245 pod_ready.go:94] pod "kube-scheduler-functional-184058" is "Ready"
	I1006 18:52:47.021069   24245 pod_ready.go:86] duration metric: took 399.916803ms for pod "kube-scheduler-functional-184058" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 18:52:47.021079   24245 pod_ready.go:40] duration metric: took 12.815782029s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 18:52:47.080784   24245 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 18:52:47.084063   24245 out.go:179] * Done! kubectl is now configured to use "functional-184058" cluster and "default" namespace by default
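	Note: at this point the functional-184058 profile is reported healthy again. A quick manual confirmation of the same state, assuming the kubeconfig context written by this run:
	    kubectl --context functional-184058 get nodes
	    kubectl --context functional-184058 -n kube-system get pods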
	
	
	==> CRI-O <==
	Oct 06 18:53:22 functional-184058 crio[3530]: time="2025-10-06T18:53:22.397127858Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-xg6nq Namespace:default ID:a0a06255680f3f88bd64f385b3f14326d2bc391e2ebdfbd0671e4233a01e3656 UID:a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9 NetNS:/var/run/netns/19accf1c-8f6a-4940-9c94-156b6b8b3da8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000d795d8}] Aliases:map[]}"
	Oct 06 18:53:22 functional-184058 crio[3530]: time="2025-10-06T18:53:22.397450061Z" level=info msg="Checking pod default_hello-node-75c85bcc94-xg6nq for CNI network kindnet (type=ptp)"
	Oct 06 18:53:22 functional-184058 crio[3530]: time="2025-10-06T18:53:22.4008003Z" level=info msg="Ran pod sandbox a0a06255680f3f88bd64f385b3f14326d2bc391e2ebdfbd0671e4233a01e3656 with infra container: default/hello-node-75c85bcc94-xg6nq/POD" id=136c9395-6b7d-4765-8b10-ededc7948c33 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 18:53:22 functional-184058 crio[3530]: time="2025-10-06T18:53:22.405108403Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=65b4334a-7385-40a0-b48f-eb28e124460d name=/runtime.v1.ImageService/PullImage
	Oct 06 18:53:26 functional-184058 crio[3530]: time="2025-10-06T18:53:26.390326314Z" level=info msg="Stopping pod sandbox: f33c6ee7b8651c824159fd07b648ece3c975df35b04ec73133e21d3992609a2d" id=eb07c118-b7f3-46f0-9f4d-c9bed39cad46 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 06 18:53:26 functional-184058 crio[3530]: time="2025-10-06T18:53:26.390380172Z" level=info msg="Stopped pod sandbox (already stopped): f33c6ee7b8651c824159fd07b648ece3c975df35b04ec73133e21d3992609a2d" id=eb07c118-b7f3-46f0-9f4d-c9bed39cad46 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 06 18:53:26 functional-184058 crio[3530]: time="2025-10-06T18:53:26.391117452Z" level=info msg="Removing pod sandbox: f33c6ee7b8651c824159fd07b648ece3c975df35b04ec73133e21d3992609a2d" id=7a005749-25bc-4466-b5b9-45cec59348d1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 06 18:53:26 functional-184058 crio[3530]: time="2025-10-06T18:53:26.394829936Z" level=info msg="Removed pod sandbox: f33c6ee7b8651c824159fd07b648ece3c975df35b04ec73133e21d3992609a2d" id=7a005749-25bc-4466-b5b9-45cec59348d1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 06 18:53:26 functional-184058 crio[3530]: time="2025-10-06T18:53:26.395467164Z" level=info msg="Stopping pod sandbox: 308f9ed3c510d8cc3698e31d875f3dac0db53d225c9a2080499a9f712c9196b5" id=cbe373c9-0c87-483f-adf8-6fb3cb092f22 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 06 18:53:26 functional-184058 crio[3530]: time="2025-10-06T18:53:26.395508165Z" level=info msg="Stopped pod sandbox (already stopped): 308f9ed3c510d8cc3698e31d875f3dac0db53d225c9a2080499a9f712c9196b5" id=cbe373c9-0c87-483f-adf8-6fb3cb092f22 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 06 18:53:26 functional-184058 crio[3530]: time="2025-10-06T18:53:26.39608045Z" level=info msg="Removing pod sandbox: 308f9ed3c510d8cc3698e31d875f3dac0db53d225c9a2080499a9f712c9196b5" id=a1da7e72-5b72-440f-96f6-e6a1b51372b2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 06 18:53:26 functional-184058 crio[3530]: time="2025-10-06T18:53:26.399454238Z" level=info msg="Removed pod sandbox: 308f9ed3c510d8cc3698e31d875f3dac0db53d225c9a2080499a9f712c9196b5" id=a1da7e72-5b72-440f-96f6-e6a1b51372b2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 06 18:53:26 functional-184058 crio[3530]: time="2025-10-06T18:53:26.40006283Z" level=info msg="Stopping pod sandbox: 7e5da4ecaaa172825baab25f00f8fb2bfe2f2e95b4b52437abd9c85bccad9dce" id=38eece7f-57cc-4fb3-ae71-57d64484878c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 06 18:53:26 functional-184058 crio[3530]: time="2025-10-06T18:53:26.400111847Z" level=info msg="Stopped pod sandbox (already stopped): 7e5da4ecaaa172825baab25f00f8fb2bfe2f2e95b4b52437abd9c85bccad9dce" id=38eece7f-57cc-4fb3-ae71-57d64484878c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 06 18:53:26 functional-184058 crio[3530]: time="2025-10-06T18:53:26.400423942Z" level=info msg="Removing pod sandbox: 7e5da4ecaaa172825baab25f00f8fb2bfe2f2e95b4b52437abd9c85bccad9dce" id=30fa17f4-2e58-49c7-8de6-74afa6bca4aa name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 06 18:53:26 functional-184058 crio[3530]: time="2025-10-06T18:53:26.404129124Z" level=info msg="Removed pod sandbox: 7e5da4ecaaa172825baab25f00f8fb2bfe2f2e95b4b52437abd9c85bccad9dce" id=30fa17f4-2e58-49c7-8de6-74afa6bca4aa name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 06 18:53:38 functional-184058 crio[3530]: time="2025-10-06T18:53:38.375288456Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1c2a2289-ddcd-49cd-ad31-f0fbdc062281 name=/runtime.v1.ImageService/PullImage
	Oct 06 18:53:45 functional-184058 crio[3530]: time="2025-10-06T18:53:45.37525217Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ded4aec4-f8b7-49d3-8fa2-b86945fcd6f3 name=/runtime.v1.ImageService/PullImage
	Oct 06 18:54:04 functional-184058 crio[3530]: time="2025-10-06T18:54:04.375479143Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5b9655e9-f9ab-4a52-9760-bc3fa75a5a9d name=/runtime.v1.ImageService/PullImage
	Oct 06 18:54:38 functional-184058 crio[3530]: time="2025-10-06T18:54:38.376090389Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9bc2149d-27b8-400b-b49b-d7875ef9e521 name=/runtime.v1.ImageService/PullImage
	Oct 06 18:54:57 functional-184058 crio[3530]: time="2025-10-06T18:54:57.374836096Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=597d0413-d218-45d6-9b69-f401e7a04092 name=/runtime.v1.ImageService/PullImage
	Oct 06 18:56:07 functional-184058 crio[3530]: time="2025-10-06T18:56:07.374355858Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=bf72a57b-17c4-49e5-b476-d1bb42f35673 name=/runtime.v1.ImageService/PullImage
	Oct 06 18:56:25 functional-184058 crio[3530]: time="2025-10-06T18:56:25.375390844Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=60b18ede-ab60-46d1-8637-4e602d0d6850 name=/runtime.v1.ImageService/PullImage
	Oct 06 18:58:54 functional-184058 crio[3530]: time="2025-10-06T18:58:54.374348487Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=575f27cc-2f5a-4d49-ad31-a5880275cc71 name=/runtime.v1.ImageService/PullImage
	Oct 06 18:59:14 functional-184058 crio[3530]: time="2025-10-06T18:59:14.374591749Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f7a5f2eb-3699-4616-8099-13586abe7e5f name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f9d7a71fffbf3       docker.io/library/nginx@sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992   9 minutes ago       Running             myfrontend                0                   25d4f3fd0a89a       sp-pod                                      default
	038672db07896       docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac   10 minutes ago      Running             nginx                     0                   a82d889d3900f       nginx-svc                                   default
	2dd5e5f3f6e29       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                3                   8f6cb7301f1ac       kube-proxy-7skbp                            kube-system
	21c0df7333d4c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   08d393d1f001d       storage-provisioner                         kube-system
	e1c6c78af7616       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   ff4ccfa766cea       kindnet-ms7pw                               kube-system
	af151c41b9eea       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   9d243f78e2459       kube-apiserver-functional-184058            kube-system
	4e79f38c41425       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   836e1ff5f11a7       kube-controller-manager-functional-184058   kube-system
	3cef3c9f7c727       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   9220566b6da30       kube-scheduler-functional-184058            kube-system
	a980c6d7259af       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   68ed6d05ae6ed       etcd-functional-184058                      kube-system
	87fd9f546b0b0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   41fd58c358d09       coredns-66bc5c9577-gvvqf                    kube-system
	b4dce4fed2ae7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Exited              kube-controller-manager   2                   836e1ff5f11a7       kube-controller-manager-functional-184058   kube-system
	672d3554a562e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Exited              storage-provisioner       2                   08d393d1f001d       storage-provisioner                         kube-system
	dfd3ea9d7c739       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Exited              kube-proxy                2                   8f6cb7301f1ac       kube-proxy-7skbp                            kube-system
	321abf76c5e51       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Exited              etcd                      2                   68ed6d05ae6ed       etcd-functional-184058                      kube-system
	3927de176f35e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Exited              kindnet-cni               2                   ff4ccfa766cea       kindnet-ms7pw                               kube-system
	9422504570c94       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Exited              kube-scheduler            2                   9220566b6da30       kube-scheduler-functional-184058            kube-system
	855662367b258       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   41fd58c358d09       coredns-66bc5c9577-gvvqf                    kube-system
	
	
	==> coredns [855662367b2584f2b2f6f0a9996df014844358f3c20380a0a7bec2e6e6284d3c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41170 - 40596 "HINFO IN 550955174148989364.8839312013360758187. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022525758s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [87fd9f546b0b01c688c9ea99795e9da4a0b627db44233ec78609d10c86f54ba6] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40211 - 56895 "HINFO IN 7270692443019109067.5651328159032943898. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016731907s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               functional-184058
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-184058
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=functional-184058
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T18_50_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 18:50:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-184058
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:03:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:01:32 +0000   Mon, 06 Oct 2025 18:50:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:01:32 +0000   Mon, 06 Oct 2025 18:50:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:01:32 +0000   Mon, 06 Oct 2025 18:50:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:01:32 +0000   Mon, 06 Oct 2025 18:51:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-184058
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 18942d70d29d49159cb1fe8bd70bc6bb
	  System UUID:                c2e64772-d27b-4305-a975-95fd5157be44
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-xg6nq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  default                     hello-node-connect-7d85dfc575-4mdj6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-gvvqf                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-184058                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-ms7pw                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-184058             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-184058    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7skbp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-184058             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-184058 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-184058 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-184058 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-184058 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-184058 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-184058 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-184058 event: Registered Node functional-184058 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-184058 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-184058 event: Registered Node functional-184058 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-184058 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-184058 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-184058 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-184058 event: Registered Node functional-184058 in Controller
	
	
	==> dmesg <==
	[Oct 6 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015541] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.518273] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033731] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.758438] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.412532] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 6 18:43] overlayfs: idmapped layers are currently not supported
	[  +0.067491] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 6 18:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 18:50] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [321abf76c5e51ee89f16ce91b6586ef4ed13171d1a0b3a44af3eec05cfde6e07] <==
	{"level":"warn","ts":"2025-10-06T18:52:13.845778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:13.858415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:13.894045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:13.936320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:13.955017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:13.979302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:14.051462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56286","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T18:52:22.964141Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-06T18:52:22.964186Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-184058","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-06T18:52:22.964315Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-06T18:52:22.965839Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-06T18:52:22.965939Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T18:52:22.965962Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-06T18:52:22.966021Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-06T18:52:22.966008Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-06T18:52:22.966043Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-06T18:52:22.966052Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T18:52:22.966032Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-06T18:52:22.966082Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-06T18:52:22.966093Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-06T18:52:22.966105Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T18:52:22.970256Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-06T18:52:22.970344Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T18:52:22.970374Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-06T18:52:22.970389Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-184058","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [a980c6d7259afde6edbc758e52609e96a78efcf1a4162467eff0105cc1f3276d] <==
	{"level":"warn","ts":"2025-10-06T18:52:29.909741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:29.922383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:29.945170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:29.960137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:29.981549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:29.992892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.015605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.032357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.060316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.077145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.146467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.180042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.206785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.232158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.252561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.271778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.293947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.311209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.349474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.381038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.402927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T18:52:30.459784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59572","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T19:02:28.871757Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1137}
	{"level":"info","ts":"2025-10-06T19:02:28.895605Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1137,"took":"23.525018ms","hash":2840489926,"current-db-size-bytes":3239936,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1429504,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-06T19:02:28.895666Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2840489926,"revision":1137,"compact-revision":-1}
	
	
	==> kernel <==
	 19:03:08 up 45 min,  0 user,  load average: 0.27, 0.45, 0.58
	Linux functional-184058 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3927de176f35eafa33a9375fc86411e70971d2402f573001b2ee67b0662f0b5f] <==
	I1006 18:52:11.362243       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 18:52:11.362728       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1006 18:52:11.362856       1 main.go:148] setting mtu 1500 for CNI 
	I1006 18:52:11.362868       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 18:52:11.362882       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T18:52:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1006 18:52:11.650441       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1006 18:52:11.651048       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 18:52:11.651124       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 18:52:11.651616       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 18:52:11.656004       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1006 18:52:11.656165       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1006 18:52:11.656260       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1006 18:52:11.656628       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1006 18:52:15.061556       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 18:52:15.061596       1 metrics.go:72] Registering metrics
	I1006 18:52:15.061692       1 controller.go:711] "Syncing nftables rules"
	I1006 18:52:21.582454       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 18:52:21.582517       1 main.go:301] handling current node
	
	
	==> kindnet [e1c6c78af761699c8099c3a8614e6566a267fcaef522842631aea3a92907d7a8] <==
	I1006 19:01:02.047239       1 main.go:301] handling current node
	I1006 19:01:12.051845       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 19:01:12.051882       1 main.go:301] handling current node
	I1006 19:01:22.047386       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 19:01:22.047422       1 main.go:301] handling current node
	I1006 19:01:32.047199       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 19:01:32.047326       1 main.go:301] handling current node
	I1006 19:01:42.053212       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 19:01:42.053261       1 main.go:301] handling current node
	I1006 19:01:52.055195       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 19:01:52.055230       1 main.go:301] handling current node
	I1006 19:02:02.055087       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 19:02:02.055123       1 main.go:301] handling current node
	I1006 19:02:12.051128       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 19:02:12.051174       1 main.go:301] handling current node
	I1006 19:02:22.052069       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 19:02:22.052169       1 main.go:301] handling current node
	I1006 19:02:32.047629       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 19:02:32.047666       1 main.go:301] handling current node
	I1006 19:02:42.052174       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 19:02:42.052297       1 main.go:301] handling current node
	I1006 19:02:52.047446       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 19:02:52.047490       1 main.go:301] handling current node
	I1006 19:03:02.053249       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1006 19:03:02.053285       1 main.go:301] handling current node
	
	
	==> kube-apiserver [af151c41b9eeac5b3823ab91f9c11af5e2d4df931ab8d2522dc5be28cc9c547c] <==
	I1006 18:52:31.393458       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1006 18:52:31.393407       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 18:52:31.393420       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1006 18:52:31.393430       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1006 18:52:31.402212       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1006 18:52:31.402348       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1006 18:52:31.419107       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1006 18:52:31.420366       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 18:52:32.094799       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1006 18:52:32.361670       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1006 18:52:32.363772       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 18:52:32.371210       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 18:52:32.821749       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1006 18:52:32.937191       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 18:52:33.007256       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 18:52:33.018544       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 18:52:38.685152       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 18:52:50.419862       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.213.12"}
	I1006 18:52:56.884696       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.104.118.186"}
	I1006 18:53:06.571029       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.108.7"}
	E1006 18:53:13.798215       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41592: use of closed network connection
	E1006 18:53:14.613659       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1006 18:53:21.920597       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53646: use of closed network connection
	I1006 18:53:22.134400       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.170.90"}
	I1006 19:02:31.316432       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [4e79f38c414254fa2d247dccd4b6fcb051d203680d8a9184193c237ceea11d70] <==
	I1006 18:52:34.688179       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1006 18:52:34.691439       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 18:52:34.691461       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 18:52:34.691470       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 18:52:34.691534       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1006 18:52:34.691565       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1006 18:52:34.692554       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1006 18:52:34.693346       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1006 18:52:34.693395       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1006 18:52:34.693474       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1006 18:52:34.693536       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1006 18:52:34.693603       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-184058"
	I1006 18:52:34.693643       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1006 18:52:34.693688       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1006 18:52:34.693667       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1006 18:52:34.694891       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1006 18:52:34.697626       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 18:52:34.701935       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1006 18:52:34.703526       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1006 18:52:34.703601       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1006 18:52:34.705762       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1006 18:52:34.710111       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 18:52:34.713202       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1006 18:52:34.733382       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 18:52:34.740533       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-controller-manager [b4dce4fed2ae7530c46ef1d8aedaa3356d624c4df65a16b32d3b27fa33003ea6] <==
	
	
	==> kube-proxy [2dd5e5f3f6e29d83a20eb22953a6b109ea4ce2a7584212e77c2377507fdaaddd] <==
	I1006 18:52:31.864209       1 server_linux.go:53] "Using iptables proxy"
	I1006 18:52:31.949454       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 18:52:32.055851       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 18:52:32.063469       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1006 18:52:32.063564       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 18:52:32.161886       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 18:52:32.161945       1 server_linux.go:132] "Using iptables Proxier"
	I1006 18:52:32.174371       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 18:52:32.174770       1 server.go:527] "Version info" version="v1.34.1"
	I1006 18:52:32.174795       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 18:52:32.179876       1 config.go:200] "Starting service config controller"
	I1006 18:52:32.179964       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 18:52:32.180008       1 config.go:106] "Starting endpoint slice config controller"
	I1006 18:52:32.180049       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 18:52:32.180123       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 18:52:32.180156       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 18:52:32.180825       1 config.go:309] "Starting node config controller"
	I1006 18:52:32.180890       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 18:52:32.180921       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 18:52:32.280471       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 18:52:32.280573       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 18:52:32.280586       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [dfd3ea9d7c73990b01a6a03b947b558e6c80862be3dec9edb3a425476d4e0190] <==
	
	
	==> kube-scheduler [3cef3c9f7c7277a827854ce373013c6170010fa9e6ea56efcf43e918e835ac85] <==
	I1006 18:52:27.698101       1 serving.go:386] Generated self-signed cert in-memory
	I1006 18:52:31.388662       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 18:52:31.388776       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 18:52:31.405517       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 18:52:31.405646       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1006 18:52:31.411513       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1006 18:52:31.405694       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 18:52:31.411651       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 18:52:31.405709       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 18:52:31.405678       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 18:52:31.412187       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 18:52:31.512166       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 18:52:31.512378       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1006 18:52:31.513182       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [9422504570c94447b34c12d649d33cc5d0891b9e3883159aace433341cb1be22] <==
	I1006 18:52:12.955981       1 serving.go:386] Generated self-signed cert in-memory
	W1006 18:52:14.835756       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1006 18:52:14.835801       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1006 18:52:14.835822       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1006 18:52:14.835829       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1006 18:52:14.927828       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 18:52:14.927918       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 18:52:14.934828       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 18:52:14.934912       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 18:52:14.935587       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 18:52:14.935653       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 18:52:15.037068       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 18:52:23.199392       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1006 18:52:23.199420       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1006 18:52:23.199452       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1006 18:52:23.199538       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 18:52:23.199565       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1006 18:52:23.199613       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 06 19:00:33 functional-184058 kubelet[4062]: E1006 19:00:33.374533    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xg6nq" podUID="a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9"
	Oct 06 19:00:40 functional-184058 kubelet[4062]: E1006 19:00:40.375010    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4mdj6" podUID="9f5a428c-21b9-4e37-9764-178633711f58"
	Oct 06 19:00:44 functional-184058 kubelet[4062]: E1006 19:00:44.374609    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xg6nq" podUID="a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9"
	Oct 06 19:00:54 functional-184058 kubelet[4062]: E1006 19:00:54.373940    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4mdj6" podUID="9f5a428c-21b9-4e37-9764-178633711f58"
	Oct 06 19:00:55 functional-184058 kubelet[4062]: E1006 19:00:55.373670    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xg6nq" podUID="a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9"
	Oct 06 19:01:09 functional-184058 kubelet[4062]: E1006 19:01:09.374436    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4mdj6" podUID="9f5a428c-21b9-4e37-9764-178633711f58"
	Oct 06 19:01:10 functional-184058 kubelet[4062]: E1006 19:01:10.374639    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xg6nq" podUID="a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9"
	Oct 06 19:01:23 functional-184058 kubelet[4062]: E1006 19:01:23.374075    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4mdj6" podUID="9f5a428c-21b9-4e37-9764-178633711f58"
	Oct 06 19:01:24 functional-184058 kubelet[4062]: E1006 19:01:24.374235    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xg6nq" podUID="a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9"
	Oct 06 19:01:35 functional-184058 kubelet[4062]: E1006 19:01:35.374372    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xg6nq" podUID="a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9"
	Oct 06 19:01:38 functional-184058 kubelet[4062]: E1006 19:01:38.373967    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4mdj6" podUID="9f5a428c-21b9-4e37-9764-178633711f58"
	Oct 06 19:01:47 functional-184058 kubelet[4062]: E1006 19:01:47.374157    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xg6nq" podUID="a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9"
	Oct 06 19:01:51 functional-184058 kubelet[4062]: E1006 19:01:51.373895    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4mdj6" podUID="9f5a428c-21b9-4e37-9764-178633711f58"
	Oct 06 19:01:58 functional-184058 kubelet[4062]: E1006 19:01:58.374714    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xg6nq" podUID="a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9"
	Oct 06 19:02:05 functional-184058 kubelet[4062]: E1006 19:02:05.373950    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4mdj6" podUID="9f5a428c-21b9-4e37-9764-178633711f58"
	Oct 06 19:02:13 functional-184058 kubelet[4062]: E1006 19:02:13.373759    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xg6nq" podUID="a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9"
	Oct 06 19:02:16 functional-184058 kubelet[4062]: E1006 19:02:16.375412    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4mdj6" podUID="9f5a428c-21b9-4e37-9764-178633711f58"
	Oct 06 19:02:26 functional-184058 kubelet[4062]: E1006 19:02:26.374337    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xg6nq" podUID="a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9"
	Oct 06 19:02:31 functional-184058 kubelet[4062]: E1006 19:02:31.373796    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4mdj6" podUID="9f5a428c-21b9-4e37-9764-178633711f58"
	Oct 06 19:02:38 functional-184058 kubelet[4062]: E1006 19:02:38.374175    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xg6nq" podUID="a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9"
	Oct 06 19:02:42 functional-184058 kubelet[4062]: E1006 19:02:42.374190    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4mdj6" podUID="9f5a428c-21b9-4e37-9764-178633711f58"
	Oct 06 19:02:49 functional-184058 kubelet[4062]: E1006 19:02:49.374272    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xg6nq" podUID="a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9"
	Oct 06 19:02:53 functional-184058 kubelet[4062]: E1006 19:02:53.374185    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4mdj6" podUID="9f5a428c-21b9-4e37-9764-178633711f58"
	Oct 06 19:03:04 functional-184058 kubelet[4062]: E1006 19:03:04.374838    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xg6nq" podUID="a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9"
	Oct 06 19:03:08 functional-184058 kubelet[4062]: E1006 19:03:08.374136    4062 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-4mdj6" podUID="9f5a428c-21b9-4e37-9764-178633711f58"
	
	
	==> storage-provisioner [21c0df7333d4ca894c6de6d0d436edbdfd98d5cc3a9c260120feea30684c4f4e] <==
	W1006 19:02:43.930138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:45.933109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:45.937603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:47.940795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:47.945615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:49.948305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:49.954916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:51.957605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:51.962026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:53.964947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:53.969361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:55.972382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:55.976720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:57.979815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:57.984364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:59.987922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:02:59.994706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:03:01.997744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:03:02.002497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:03:04.005506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:03:04.012136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:03:06.015258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:03:06.020722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:03:08.026435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:03:08.032902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [672d3554a562ecbb977ae6d14c2571cf8b4a2bc4c6adfddfa6477e70cb8ed908] <==
	I1006 18:52:11.809366       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-184058 -n functional-184058
helpers_test.go:269: (dbg) Run:  kubectl --context functional-184058 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-xg6nq hello-node-connect-7d85dfc575-4mdj6
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-184058 describe pod hello-node-75c85bcc94-xg6nq hello-node-connect-7d85dfc575-4mdj6
helpers_test.go:290: (dbg) kubectl --context functional-184058 describe pod hello-node-75c85bcc94-xg6nq hello-node-connect-7d85dfc575-4mdj6:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-xg6nq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-184058/192.168.49.2
	Start Time:       Mon, 06 Oct 2025 18:53:22 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-87k9f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-87k9f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-xg6nq to functional-184058
	  Normal   Pulling    6m44s (x5 over 9m47s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m44s (x5 over 9m47s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m44s (x5 over 9m47s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m36s (x21 over 9m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m36s (x21 over 9m47s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-4mdj6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-184058/192.168.49.2
	Start Time:       Mon, 06 Oct 2025 18:53:06 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bpp7c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bpp7c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-4mdj6 to functional-184058
	  Normal   Pulling    7m2s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m2s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m2s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     1s (x42 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.50s)
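
Root cause visible in the events above: every pull uses the unqualified reference "kicbase/echo-server", and CRI-O's short-name mode is enforcing, so the ambiguous short name is never resolved to a registry. The following is a minimal workaround sketch only, run inside the node (e.g. via `minikube -p functional-184058 ssh`); it assumes the image is actually published on docker.io and that the node reads drop-ins from /etc/containers/registries.conf.d, neither of which is verified by this report:

	# Hypothetical sketch: map the short name to a fully-qualified reference so
	# enforcing short-name mode can resolve it unambiguously, then restart CRI-O.
	sudo tee /etc/containers/registries.conf.d/99-echo-server.conf <<'EOF'
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF
	sudo systemctl restart crio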

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-184058 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-184058 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-xg6nq" [a745d6b1-9cdf-4b5b-84c1-2d5bd4b754d9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1006 18:53:32.316296    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 18:55:48.449378    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 18:56:16.158493    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:00:48.448527    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-184058 -n functional-184058
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-06 19:03:22.572805947 +0000 UTC m=+1312.091873019
functional_test.go:1460: (dbg) Run:  kubectl --context functional-184058 describe po hello-node-75c85bcc94-xg6nq -n default
functional_test.go:1460: (dbg) kubectl --context functional-184058 describe po hello-node-75c85bcc94-xg6nq -n default:
Name:             hello-node-75c85bcc94-xg6nq
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-184058/192.168.49.2
Start Time:       Mon, 06 Oct 2025 18:53:22 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-87k9f (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-87k9f:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-xg6nq to functional-184058
Normal   Pulling    6m57s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m57s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m57s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m49s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m49s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-184058 logs hello-node-75c85bcc94-xg6nq -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-184058 logs hello-node-75c85bcc94-xg6nq -n default: exit status 1 (122.935636ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-xg6nq" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-184058 logs hello-node-75c85bcc94-xg6nq -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.89s)
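
The deployment is created with the same unqualified name ("kicbase/echo-server"), so the enforcing short-name policy blocks every pull here as well. As an alternative to the registry alias shown earlier, a fully-qualified image reference sidesteps short-name resolution entirely; the sketch below assumes the image is published to docker.io (not verified from this log):

	# Illustrative only: repoint the existing deployment at a fully-qualified reference.
	kubectl --context functional-184058 set image deployment/hello-node \
	  echo-server=docker.io/kicbase/echo-server:latest
	kubectl --context functional-184058 rollout status deployment/hello-node --timeout=120s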

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-184058 service --namespace=default --https --url hello-node: exit status 115 (515.333951ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30934
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-184058 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
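
This failure, like the Format and URL failures below, is downstream of the pull problem: the NodePort URL is printed, but minikube exits with SVC_UNREACHABLE because the service has no running backend pod. A quick readiness check before fetching the URL, as a sketch:

	# Confirm the service has ready endpoints and that its pods are actually running.
	kubectl --context functional-184058 get endpoints hello-node
	kubectl --context functional-184058 get pods -l app=hello-node -o wide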

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-184058 service hello-node --url --format={{.IP}}: exit status 115 (462.687901ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-184058 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-184058 service hello-node --url: exit status 115 (463.724951ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30934
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-184058 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30934
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image load --daemon kicbase/echo-server:functional-184058 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-184058" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.10s)
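
The image-load assertions only check that the tag shows up in `image ls` afterwards. To see what actually landed in the node's CRI-O image store (loaded images typically appear under a localhost/ or docker.io/ prefix), a direct inspection sketch:

	# List the images CRI-O holds on the node, then compare with minikube's own view.
	minikube -p functional-184058 ssh -- sudo crictl images
	out/minikube-linux-arm64 -p functional-184058 image ls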

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image load --daemon kicbase/echo-server:functional-184058 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-184058 image load --daemon kicbase/echo-server:functional-184058 --alsologtostderr: (3.01511971s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-184058" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-184058
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image load --daemon kicbase/echo-server:functional-184058 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-184058" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image save kicbase/echo-server:functional-184058 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1006 19:03:36.204522   32315 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:03:36.204773   32315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:03:36.204812   32315 out.go:374] Setting ErrFile to fd 2...
	I1006 19:03:36.204835   32315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:03:36.205125   32315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:03:36.205783   32315 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:03:36.205954   32315 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:03:36.206500   32315 cli_runner.go:164] Run: docker container inspect functional-184058 --format={{.State.Status}}
	I1006 19:03:36.229749   32315 ssh_runner.go:195] Run: systemctl --version
	I1006 19:03:36.229806   32315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
	I1006 19:03:36.250400   32315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/functional-184058/id_rsa Username:docker}
	I1006 19:03:36.346494   32315 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1006 19:03:36.346584   32315 cache_images.go:254] Failed to load cached images for "functional-184058": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1006 19:03:36.346612   32315 cache_images.go:266] failed pushing to: functional-184058

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
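
This one is a knock-on failure: ImageSaveToFile above never wrote /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar, so the load has nothing to read (the `stat ... no such file or directory` in the log). A sketch that makes the dependency explicit by re-running the save and verifying the tarball before loading:

	# Re-run the save, then only attempt the load if the tarball actually exists.
	out/minikube-linux-arm64 -p functional-184058 image save kicbase/echo-server:functional-184058 \
	  /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
	ls -lh /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar \
	  && out/minikube-linux-arm64 -p functional-184058 image load \
	       /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr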

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-184058
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image save --daemon kicbase/echo-server:functional-184058 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-184058
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-184058: exit status 1 (19.008024ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-184058

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-184058

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.89s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-692334 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-692334 --output=json --user=testUser: exit status 80 (1.88250358s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b9225d57-9291-4601-bd34-031438282097","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-692334 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"4196c88e-efd0-444f-bbc4-0145e2971bfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-06T19:17:02Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"08025e80-3c37-4df4-a803-bab414dc6fdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-692334 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.89s)
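
Pause and unpause fail identically: minikube runs `sudo runc list -f json` on the node and runc cannot open /run/runc. A small diagnostic sketch; it assumes the node is reachable via `minikube ssh` and that a likely explanation is CRI-O using a different low-level runtime (for example crun, which keeps its state under a different directory), which this log does not confirm:

	# Inspect which OCI runtime CRI-O reports and which runtime state directories exist.
	minikube -p json-output-692334 ssh -- sudo crictl info | grep -i runtime
	minikube -p json-output-692334 ssh -- 'ls -d /run/runc /run/crun 2>&1'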

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.55s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-692334 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-692334 --output=json --user=testUser: exit status 80 (1.54672238s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fce4a459-5b60-4569-9ca2-1d72ecaa6429","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-692334 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"e9bc336f-0ef5-4629-ae55-5cc95f8b291b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-06T19:17:04Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"06765938-a061-4273-9946-064ae27ac079","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-692334 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.55s)

                                                
                                    
x
+
TestPause/serial/Pause (6.64s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-719933 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-719933 --alsologtostderr -v=5: exit status 80 (1.882841582s)

                                                
                                                
-- stdout --
	* Pausing node pause-719933 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:39:10.898106  166846 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:39:10.898885  166846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:39:10.898901  166846 out.go:374] Setting ErrFile to fd 2...
	I1006 19:39:10.898907  166846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:39:10.899258  166846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:39:10.899565  166846 out.go:368] Setting JSON to false
	I1006 19:39:10.899608  166846 mustload.go:65] Loading cluster: pause-719933
	I1006 19:39:10.900134  166846 config.go:182] Loaded profile config "pause-719933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:39:10.900668  166846 cli_runner.go:164] Run: docker container inspect pause-719933 --format={{.State.Status}}
	I1006 19:39:10.917597  166846 host.go:66] Checking if "pause-719933" exists ...
	I1006 19:39:10.917947  166846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:39:10.973854  166846 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-06 19:39:10.964630673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:39:10.974497  166846 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-719933 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1006 19:39:10.977826  166846 out.go:179] * Pausing node pause-719933 ... 
	I1006 19:39:10.981638  166846 host.go:66] Checking if "pause-719933" exists ...
	I1006 19:39:10.981966  166846 ssh_runner.go:195] Run: systemctl --version
	I1006 19:39:10.982019  166846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:39:10.998773  166846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/pause-719933/id_rsa Username:docker}
	I1006 19:39:11.098956  166846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:39:11.114989  166846 pause.go:51] kubelet running: true
	I1006 19:39:11.115064  166846 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:39:11.407003  166846 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:39:11.407178  166846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:39:11.505901  166846 cri.go:89] found id: "fbfdf2dfedc0e45c2f1e89a8c3423338cddc71d3952ca96617352bba0eeaeb18"
	I1006 19:39:11.505924  166846 cri.go:89] found id: "893d9c10f4bd96fc39fc11cf22e3f42bed5f93fc5a7fc287a9290b2e3ff84dfe"
	I1006 19:39:11.505928  166846 cri.go:89] found id: "961b41de3573fdda3b8ea4db9c54d7e2815089c28a58832b9720d3c4a66d4d91"
	I1006 19:39:11.505933  166846 cri.go:89] found id: "8e8330f950aa7b3752a5fe13d772a7e3fdbbadc6d01761c6972079bdf20a9513"
	I1006 19:39:11.505937  166846 cri.go:89] found id: "aa224c01881ad4743b57a626d9ef90f9c1ac21439aa850feca08f658dd0552d9"
	I1006 19:39:11.505940  166846 cri.go:89] found id: "bd349138ed2daf1aa487424d2bc98d409e705c9241944a3c17768fd46f7f8289"
	I1006 19:39:11.505943  166846 cri.go:89] found id: "b4eb3f8f3e81f8af021cd9dd5ff7fd72c58ef133553cd056ccab41427cc64ece"
	I1006 19:39:11.505946  166846 cri.go:89] found id: "6ccbb47882248567001f98caa3ed6d08c96bd67cf7e41b581b8e20288b7ba653"
	I1006 19:39:11.505949  166846 cri.go:89] found id: "611f7ec82bc68048be0ca0d31606b1d278412943b79fc8a0c4c33da5805164c5"
	I1006 19:39:11.505955  166846 cri.go:89] found id: "4d6e88c27772c35c97d8c35c87ce943957683a84df865f52d99dc5d63f21d8a0"
	I1006 19:39:11.505958  166846 cri.go:89] found id: "70f2389a27f2cceb23c12fdf02125896dba5d458c676409a98715563aad756f6"
	I1006 19:39:11.505961  166846 cri.go:89] found id: "098520989a4ccf81cd5f027022cd81c910e3d341bf7918c27926e6a914e07feb"
	I1006 19:39:11.505964  166846 cri.go:89] found id: "c7989b7481b364d561193295584c5dca811b622af9cf4e24341e42e50a91f5fc"
	I1006 19:39:11.505967  166846 cri.go:89] found id: "dbf88fd449582624356375d9d2477b46834de96fb325ba01fe1444314a62865e"
	I1006 19:39:11.505970  166846 cri.go:89] found id: ""
	I1006 19:39:11.506024  166846 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:39:11.517698  166846 retry.go:31] will retry after 332.260619ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:39:11Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:39:11.850182  166846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:39:11.863415  166846 pause.go:51] kubelet running: false
	I1006 19:39:11.863476  166846 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:39:12.002950  166846 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:39:12.003036  166846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:39:12.078338  166846 cri.go:89] found id: "fbfdf2dfedc0e45c2f1e89a8c3423338cddc71d3952ca96617352bba0eeaeb18"
	I1006 19:39:12.078363  166846 cri.go:89] found id: "893d9c10f4bd96fc39fc11cf22e3f42bed5f93fc5a7fc287a9290b2e3ff84dfe"
	I1006 19:39:12.078377  166846 cri.go:89] found id: "961b41de3573fdda3b8ea4db9c54d7e2815089c28a58832b9720d3c4a66d4d91"
	I1006 19:39:12.078381  166846 cri.go:89] found id: "8e8330f950aa7b3752a5fe13d772a7e3fdbbadc6d01761c6972079bdf20a9513"
	I1006 19:39:12.078384  166846 cri.go:89] found id: "aa224c01881ad4743b57a626d9ef90f9c1ac21439aa850feca08f658dd0552d9"
	I1006 19:39:12.078388  166846 cri.go:89] found id: "bd349138ed2daf1aa487424d2bc98d409e705c9241944a3c17768fd46f7f8289"
	I1006 19:39:12.078391  166846 cri.go:89] found id: "b4eb3f8f3e81f8af021cd9dd5ff7fd72c58ef133553cd056ccab41427cc64ece"
	I1006 19:39:12.078394  166846 cri.go:89] found id: "6ccbb47882248567001f98caa3ed6d08c96bd67cf7e41b581b8e20288b7ba653"
	I1006 19:39:12.078397  166846 cri.go:89] found id: "611f7ec82bc68048be0ca0d31606b1d278412943b79fc8a0c4c33da5805164c5"
	I1006 19:39:12.078403  166846 cri.go:89] found id: "4d6e88c27772c35c97d8c35c87ce943957683a84df865f52d99dc5d63f21d8a0"
	I1006 19:39:12.078406  166846 cri.go:89] found id: "70f2389a27f2cceb23c12fdf02125896dba5d458c676409a98715563aad756f6"
	I1006 19:39:12.078410  166846 cri.go:89] found id: "098520989a4ccf81cd5f027022cd81c910e3d341bf7918c27926e6a914e07feb"
	I1006 19:39:12.078413  166846 cri.go:89] found id: "c7989b7481b364d561193295584c5dca811b622af9cf4e24341e42e50a91f5fc"
	I1006 19:39:12.078416  166846 cri.go:89] found id: "dbf88fd449582624356375d9d2477b46834de96fb325ba01fe1444314a62865e"
	I1006 19:39:12.078419  166846 cri.go:89] found id: ""
	I1006 19:39:12.078466  166846 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:39:12.090502  166846 retry.go:31] will retry after 387.105057ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:39:12Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:39:12.477899  166846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:39:12.491590  166846 pause.go:51] kubelet running: false
	I1006 19:39:12.491656  166846 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:39:12.628552  166846 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:39:12.628639  166846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:39:12.698786  166846 cri.go:89] found id: "fbfdf2dfedc0e45c2f1e89a8c3423338cddc71d3952ca96617352bba0eeaeb18"
	I1006 19:39:12.698809  166846 cri.go:89] found id: "893d9c10f4bd96fc39fc11cf22e3f42bed5f93fc5a7fc287a9290b2e3ff84dfe"
	I1006 19:39:12.698814  166846 cri.go:89] found id: "961b41de3573fdda3b8ea4db9c54d7e2815089c28a58832b9720d3c4a66d4d91"
	I1006 19:39:12.698817  166846 cri.go:89] found id: "8e8330f950aa7b3752a5fe13d772a7e3fdbbadc6d01761c6972079bdf20a9513"
	I1006 19:39:12.698822  166846 cri.go:89] found id: "aa224c01881ad4743b57a626d9ef90f9c1ac21439aa850feca08f658dd0552d9"
	I1006 19:39:12.698825  166846 cri.go:89] found id: "bd349138ed2daf1aa487424d2bc98d409e705c9241944a3c17768fd46f7f8289"
	I1006 19:39:12.698829  166846 cri.go:89] found id: "b4eb3f8f3e81f8af021cd9dd5ff7fd72c58ef133553cd056ccab41427cc64ece"
	I1006 19:39:12.698832  166846 cri.go:89] found id: "6ccbb47882248567001f98caa3ed6d08c96bd67cf7e41b581b8e20288b7ba653"
	I1006 19:39:12.698835  166846 cri.go:89] found id: "611f7ec82bc68048be0ca0d31606b1d278412943b79fc8a0c4c33da5805164c5"
	I1006 19:39:12.698841  166846 cri.go:89] found id: "4d6e88c27772c35c97d8c35c87ce943957683a84df865f52d99dc5d63f21d8a0"
	I1006 19:39:12.698844  166846 cri.go:89] found id: "70f2389a27f2cceb23c12fdf02125896dba5d458c676409a98715563aad756f6"
	I1006 19:39:12.698847  166846 cri.go:89] found id: "098520989a4ccf81cd5f027022cd81c910e3d341bf7918c27926e6a914e07feb"
	I1006 19:39:12.698851  166846 cri.go:89] found id: "c7989b7481b364d561193295584c5dca811b622af9cf4e24341e42e50a91f5fc"
	I1006 19:39:12.698857  166846 cri.go:89] found id: "dbf88fd449582624356375d9d2477b46834de96fb325ba01fe1444314a62865e"
	I1006 19:39:12.698860  166846 cri.go:89] found id: ""
	I1006 19:39:12.698912  166846 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:39:12.713587  166846 out.go:203] 
	W1006 19:39:12.716632  166846 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:39:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 19:39:12.716656  166846 out.go:285] * 
	W1006 19:39:12.721422  166846 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 19:39:12.724176  166846 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-719933 --alsologtostderr -v=5" : exit status 80
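The exit status 80 above comes from minikube's pause path: per the log it lists CRI containers with crictl, stops the kubelet, and then shells out to `sudo runc list -f json` to enumerate running containers; that last call fails because `/run/runc` (runc's default state root) is absent on the node at that moment. A minimal reproduction sketch against the same profile, assuming the cluster is still up and the runtime is CRI-O driving runc (the commands below mirror what the log shows, they are not a verified fix):

	# Re-run the calls the pause step made, in order (sketch; profile name taken from the log above).
	out/minikube-linux-arm64 -p pause-719933 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-arm64 -p pause-719933 ssh -- "ls -ld /run/runc"        # missing here, hence "no such file or directory"
	out/minikube-linux-arm64 -p pause-719933 ssh -- "sudo runc list -f json"  # the call that returned status 1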
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-719933
helpers_test.go:243: (dbg) docker inspect pause-719933:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc5194c638ea19cae1dff7e756c2b6f31262614e8f03caac134766af5e2a924c",
	        "Created": "2025-10-06T19:37:30.340615797Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 160441,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:37:30.429213461Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/dc5194c638ea19cae1dff7e756c2b6f31262614e8f03caac134766af5e2a924c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc5194c638ea19cae1dff7e756c2b6f31262614e8f03caac134766af5e2a924c/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc5194c638ea19cae1dff7e756c2b6f31262614e8f03caac134766af5e2a924c/hosts",
	        "LogPath": "/var/lib/docker/containers/dc5194c638ea19cae1dff7e756c2b6f31262614e8f03caac134766af5e2a924c/dc5194c638ea19cae1dff7e756c2b6f31262614e8f03caac134766af5e2a924c-json.log",
	        "Name": "/pause-719933",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-719933:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-719933",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc5194c638ea19cae1dff7e756c2b6f31262614e8f03caac134766af5e2a924c",
	                "LowerDir": "/var/lib/docker/overlay2/46d0e14401bbf0ee17d31407e7cb2fe2d0480435408468baceef517afeda66bb-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46d0e14401bbf0ee17d31407e7cb2fe2d0480435408468baceef517afeda66bb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46d0e14401bbf0ee17d31407e7cb2fe2d0480435408468baceef517afeda66bb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46d0e14401bbf0ee17d31407e7cb2fe2d0480435408468baceef517afeda66bb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-719933",
	                "Source": "/var/lib/docker/volumes/pause-719933/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-719933",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-719933",
	                "name.minikube.sigs.k8s.io": "pause-719933",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "17767d519525d8d8d833ab8a04760ca6ae6b3ecba149a595fc109c7aee27cd9b",
	            "SandboxKey": "/var/run/docker/netns/17767d519525",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33029"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33028"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-719933": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:85:6a:6f:89:e6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "304c0013f70a9044a9185cb587a91cc4afec30622dc12a4648005723fcb6eeec",
	                    "EndpointID": "7a24afa7a956a7e7dc9f94f42039ceb715cec0306d8b21e92b20a41cb7181f02",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-719933",
	                        "dc5194c638ea"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
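For post-mortem purposes the inspect dump above boils down to two facts: the container is still `running` (never paused) and its endpoint on the `pause-719933` network is intact. A convenience sketch for pulling just those fields, assuming the container still exists (not part of the test harness):

	# Container state: prints "running false" when the pause never took effect.
	docker inspect -f '{{.State.Status}} {{.State.Paused}}' pause-719933
	# Node IP on the cluster network and the SSH port forwarding the helpers use.
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pause-719933
	docker port pause-719933 22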
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-719933 -n pause-719933
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-719933 -n pause-719933: exit status 2 (354.757221ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-719933 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-719933 logs -n 25: (1.423076982s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-262772 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:33 UTC │ 06 Oct 25 19:34 UTC │
	│ start   │ -p missing-upgrade-911983 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-911983    │ jenkins │ v1.32.0 │ 06 Oct 25 19:33 UTC │ 06 Oct 25 19:34 UTC │
	│ start   │ -p NoKubernetes-262772 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:34 UTC │
	│ delete  │ -p NoKubernetes-262772                                                                                                                   │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:34 UTC │
	│ start   │ -p NoKubernetes-262772 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:34 UTC │
	│ ssh     │ -p NoKubernetes-262772 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │                     │
	│ stop    │ -p NoKubernetes-262772                                                                                                                   │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:34 UTC │
	│ start   │ -p NoKubernetes-262772 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:34 UTC │
	│ start   │ -p missing-upgrade-911983 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-911983    │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:35 UTC │
	│ ssh     │ -p NoKubernetes-262772 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │                     │
	│ delete  │ -p NoKubernetes-262772                                                                                                                   │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:34 UTC │
	│ start   │ -p kubernetes-upgrade-977990 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-977990 │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:35 UTC │
	│ delete  │ -p missing-upgrade-911983                                                                                                                │ missing-upgrade-911983    │ jenkins │ v1.37.0 │ 06 Oct 25 19:35 UTC │ 06 Oct 25 19:35 UTC │
	│ stop    │ -p kubernetes-upgrade-977990                                                                                                             │ kubernetes-upgrade-977990 │ jenkins │ v1.37.0 │ 06 Oct 25 19:35 UTC │ 06 Oct 25 19:35 UTC │
	│ start   │ -p stopped-upgrade-360545 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-360545    │ jenkins │ v1.32.0 │ 06 Oct 25 19:35 UTC │ 06 Oct 25 19:36 UTC │
	│ start   │ -p kubernetes-upgrade-977990 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-977990 │ jenkins │ v1.37.0 │ 06 Oct 25 19:35 UTC │                     │
	│ stop    │ stopped-upgrade-360545 stop                                                                                                              │ stopped-upgrade-360545    │ jenkins │ v1.32.0 │ 06 Oct 25 19:36 UTC │ 06 Oct 25 19:36 UTC │
	│ start   │ -p stopped-upgrade-360545 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-360545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:36 UTC │ 06 Oct 25 19:36 UTC │
	│ delete  │ -p stopped-upgrade-360545                                                                                                                │ stopped-upgrade-360545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:36 UTC │ 06 Oct 25 19:36 UTC │
	│ start   │ -p running-upgrade-462878 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-462878    │ jenkins │ v1.32.0 │ 06 Oct 25 19:36 UTC │ 06 Oct 25 19:37 UTC │
	│ start   │ -p running-upgrade-462878 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-462878    │ jenkins │ v1.37.0 │ 06 Oct 25 19:37 UTC │ 06 Oct 25 19:37 UTC │
	│ delete  │ -p running-upgrade-462878                                                                                                                │ running-upgrade-462878    │ jenkins │ v1.37.0 │ 06 Oct 25 19:37 UTC │ 06 Oct 25 19:37 UTC │
	│ start   │ -p pause-719933 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-719933              │ jenkins │ v1.37.0 │ 06 Oct 25 19:37 UTC │ 06 Oct 25 19:38 UTC │
	│ start   │ -p pause-719933 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-719933              │ jenkins │ v1.37.0 │ 06 Oct 25 19:38 UTC │ 06 Oct 25 19:39 UTC │
	│ pause   │ -p pause-719933 --alsologtostderr -v=5                                                                                                   │ pause-719933              │ jenkins │ v1.37.0 │ 06 Oct 25 19:39 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:38:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:38:43.477582  164904 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:38:43.477705  164904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:38:43.477715  164904 out.go:374] Setting ErrFile to fd 2...
	I1006 19:38:43.477720  164904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:38:43.477958  164904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:38:43.478306  164904 out.go:368] Setting JSON to false
	I1006 19:38:43.479256  164904 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4859,"bootTime":1759774665,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:38:43.479338  164904 start.go:140] virtualization:  
	I1006 19:38:43.484648  164904 out.go:179] * [pause-719933] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:38:43.487922  164904 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:38:43.487971  164904 notify.go:220] Checking for updates...
	I1006 19:38:43.493786  164904 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:38:43.497184  164904 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:38:43.500213  164904 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:38:43.504201  164904 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:38:43.507310  164904 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:38:43.511671  164904 config.go:182] Loaded profile config "pause-719933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:38:43.512890  164904 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:38:43.536607  164904 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:38:43.536737  164904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:38:43.597902  164904 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-06 19:38:43.58827506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:38:43.598016  164904 docker.go:318] overlay module found
	I1006 19:38:43.601196  164904 out.go:179] * Using the docker driver based on existing profile
	I1006 19:38:43.604016  164904 start.go:304] selected driver: docker
	I1006 19:38:43.604038  164904 start.go:924] validating driver "docker" against &{Name:pause-719933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-719933 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:38:43.604185  164904 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:38:43.604299  164904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:38:43.682936  164904 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-06 19:38:43.673378079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:38:43.683413  164904 cni.go:84] Creating CNI manager for ""
	I1006 19:38:43.683476  164904 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:38:43.683523  164904 start.go:348] cluster config:
	{Name:pause-719933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-719933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:38:43.686987  164904 out.go:179] * Starting "pause-719933" primary control-plane node in "pause-719933" cluster
	I1006 19:38:43.689859  164904 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:38:43.692949  164904 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:38:43.695959  164904 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:38:43.696018  164904 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:38:43.696042  164904 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:38:43.696050  164904 cache.go:58] Caching tarball of preloaded images
	I1006 19:38:43.696145  164904 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:38:43.696156  164904 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:38:43.696296  164904 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/config.json ...
	I1006 19:38:43.715646  164904 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:38:43.715668  164904 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:38:43.715681  164904 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:38:43.715731  164904 start.go:360] acquireMachinesLock for pause-719933: {Name:mkc41d7470fc9b98864ba4a88dcf841a69abe0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:38:43.715791  164904 start.go:364] duration metric: took 36.3µs to acquireMachinesLock for "pause-719933"
	I1006 19:38:43.715816  164904 start.go:96] Skipping create...Using existing machine configuration
	I1006 19:38:43.715828  164904 fix.go:54] fixHost starting: 
	I1006 19:38:43.716086  164904 cli_runner.go:164] Run: docker container inspect pause-719933 --format={{.State.Status}}
	I1006 19:38:43.733099  164904 fix.go:112] recreateIfNeeded on pause-719933: state=Running err=<nil>
	W1006 19:38:43.733129  164904 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 19:38:42.581512  149088 cri.go:89] found id: ""
	I1006 19:38:42.581536  149088 logs.go:282] 0 containers: []
	W1006 19:38:42.581545  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:38:42.581554  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:38:42.581565  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:38:42.704247  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:38:42.704282  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:38:42.719016  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:38:42.719044  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:38:42.788175  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:38:42.788198  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:38:42.788212  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:42.822188  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:38:42.822216  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:42.847197  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:38:42.847224  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:38:42.891172  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:38:42.891208  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:38:45.430741  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:38:45.431938  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:38:45.432502  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:38:45.432594  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:38:45.484107  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:45.484133  149088 cri.go:89] found id: ""
	I1006 19:38:45.484142  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:38:45.484202  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:45.489158  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:38:45.489247  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:38:45.529844  149088 cri.go:89] found id: ""
	I1006 19:38:45.529876  149088 logs.go:282] 0 containers: []
	W1006 19:38:45.529885  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:38:45.529891  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:38:45.529952  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:38:45.560766  149088 cri.go:89] found id: ""
	I1006 19:38:45.560791  149088 logs.go:282] 0 containers: []
	W1006 19:38:45.560807  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:38:45.560814  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:38:45.560876  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:38:45.589975  149088 cri.go:89] found id: ""
	I1006 19:38:45.590002  149088 logs.go:282] 0 containers: []
	W1006 19:38:45.590012  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:38:45.590019  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:38:45.590125  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:38:45.618870  149088 cri.go:89] found id: ""
	I1006 19:38:45.618894  149088 logs.go:282] 0 containers: []
	W1006 19:38:45.618903  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:38:45.618909  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:38:45.618972  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:38:45.647029  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:45.647049  149088 cri.go:89] found id: ""
	I1006 19:38:45.647057  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:38:45.647113  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:45.650829  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:38:45.650903  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:38:45.679342  149088 cri.go:89] found id: ""
	I1006 19:38:45.679365  149088 logs.go:282] 0 containers: []
	W1006 19:38:45.679374  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:38:45.679381  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:38:45.679440  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:38:45.705051  149088 cri.go:89] found id: ""
	I1006 19:38:45.705131  149088 logs.go:282] 0 containers: []
	W1006 19:38:45.705147  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:38:45.705156  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:38:45.705168  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:38:45.720065  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:38:45.720095  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:38:45.797275  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:38:45.797338  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:38:45.797360  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:45.828598  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:38:45.828631  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:45.858392  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:38:45.858422  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:38:45.902970  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:38:45.903006  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:38:45.932364  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:38:45.932396  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:38:43.736384  164904 out.go:252] * Updating the running docker "pause-719933" container ...
	I1006 19:38:43.736438  164904 machine.go:93] provisionDockerMachine start ...
	I1006 19:38:43.736517  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:43.753657  164904 main.go:141] libmachine: Using SSH client type: native
	I1006 19:38:43.754013  164904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33025 <nil> <nil>}
	I1006 19:38:43.754031  164904 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:38:43.887213  164904 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-719933
	
	I1006 19:38:43.887238  164904 ubuntu.go:182] provisioning hostname "pause-719933"
	I1006 19:38:43.887335  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:43.905038  164904 main.go:141] libmachine: Using SSH client type: native
	I1006 19:38:43.905356  164904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33025 <nil> <nil>}
	I1006 19:38:43.905371  164904 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-719933 && echo "pause-719933" | sudo tee /etc/hostname
	I1006 19:38:44.049845  164904 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-719933
	
	I1006 19:38:44.049929  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:44.067969  164904 main.go:141] libmachine: Using SSH client type: native
	I1006 19:38:44.068283  164904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33025 <nil> <nil>}
	I1006 19:38:44.068307  164904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-719933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-719933/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-719933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:38:44.204018  164904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:38:44.204046  164904 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:38:44.204065  164904 ubuntu.go:190] setting up certificates
	I1006 19:38:44.204073  164904 provision.go:84] configureAuth start
	I1006 19:38:44.204130  164904 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-719933
	I1006 19:38:44.221728  164904 provision.go:143] copyHostCerts
	I1006 19:38:44.221800  164904 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:38:44.221819  164904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:38:44.221895  164904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:38:44.222000  164904 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:38:44.222011  164904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:38:44.222037  164904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:38:44.222097  164904 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:38:44.222105  164904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:38:44.222128  164904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:38:44.222178  164904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.pause-719933 san=[127.0.0.1 192.168.85.2 localhost minikube pause-719933]
	I1006 19:38:44.609082  164904 provision.go:177] copyRemoteCerts
	I1006 19:38:44.609151  164904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:38:44.609195  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:44.627376  164904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/pause-719933/id_rsa Username:docker}
	I1006 19:38:44.723574  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:38:44.743133  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1006 19:38:44.761500  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 19:38:44.779737  164904 provision.go:87] duration metric: took 575.640319ms to configureAuth
	I1006 19:38:44.779806  164904 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:38:44.780054  164904 config.go:182] Loaded profile config "pause-719933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:38:44.780171  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:44.797965  164904 main.go:141] libmachine: Using SSH client type: native
	I1006 19:38:44.798281  164904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33025 <nil> <nil>}
	I1006 19:38:44.798301  164904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:38:50.140091  164904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:38:50.140112  164904 machine.go:96] duration metric: took 6.403665399s to provisionDockerMachine
	I1006 19:38:50.140123  164904 start.go:293] postStartSetup for "pause-719933" (driver="docker")
	I1006 19:38:50.140133  164904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:38:50.140213  164904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:38:50.140258  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:50.158784  164904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/pause-719933/id_rsa Username:docker}
	I1006 19:38:50.255891  164904 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:38:50.259666  164904 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:38:50.259732  164904 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:38:50.259751  164904 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:38:50.259819  164904 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:38:50.259937  164904 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:38:50.260058  164904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:38:50.268016  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:38:50.285949  164904 start.go:296] duration metric: took 145.811356ms for postStartSetup
	I1006 19:38:50.286039  164904 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:38:50.286107  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:50.303731  164904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/pause-719933/id_rsa Username:docker}
	I1006 19:38:50.397121  164904 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:38:50.402166  164904 fix.go:56] duration metric: took 6.686336212s for fixHost
	I1006 19:38:50.402190  164904 start.go:83] releasing machines lock for "pause-719933", held for 6.686385812s
	I1006 19:38:50.402264  164904 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-719933
	I1006 19:38:50.421320  164904 ssh_runner.go:195] Run: cat /version.json
	I1006 19:38:50.421362  164904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:38:50.421380  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:50.421427  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:50.447904  164904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/pause-719933/id_rsa Username:docker}
	I1006 19:38:50.453267  164904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/pause-719933/id_rsa Username:docker}
	I1006 19:38:50.629030  164904 ssh_runner.go:195] Run: systemctl --version
	I1006 19:38:50.635676  164904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:38:50.679927  164904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:38:50.684823  164904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:38:50.684889  164904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:38:50.692757  164904 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 19:38:50.692822  164904 start.go:495] detecting cgroup driver to use...
	I1006 19:38:50.692865  164904 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:38:50.692917  164904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:38:50.708401  164904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:38:50.721398  164904 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:38:50.721462  164904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:38:50.736968  164904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:38:50.750488  164904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:38:50.883597  164904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:38:51.024455  164904 docker.go:234] disabling docker service ...
	I1006 19:38:51.024533  164904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:38:51.040735  164904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:38:51.054865  164904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:38:51.192686  164904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:38:51.323752  164904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:38:51.337406  164904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:38:51.351857  164904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:38:51.351932  164904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:38:51.360936  164904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:38:51.361047  164904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:38:51.370073  164904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:38:51.378801  164904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:38:51.387775  164904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:38:51.396313  164904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:38:51.405761  164904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:38:51.414164  164904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:38:51.423136  164904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:38:51.430757  164904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:38:51.438327  164904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:38:51.568515  164904 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 19:38:51.784930  164904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:38:51.785019  164904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:38:51.792122  164904 start.go:563] Will wait 60s for crictl version
	I1006 19:38:51.792216  164904 ssh_runner.go:195] Run: which crictl
	I1006 19:38:51.797931  164904 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:38:51.836906  164904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:38:51.837016  164904 ssh_runner.go:195] Run: crio --version
	I1006 19:38:51.881411  164904 ssh_runner.go:195] Run: crio --version
	I1006 19:38:51.931437  164904 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
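	[editor's sketch] The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image and the cgroupfs cgroup manager, then restarts CRI-O. A minimal stand-alone Go sketch of the same two edits follows; the file path, replacement values and systemd restart come from the log, while the in-process regexp approach (instead of minikube's ssh_runner + sed) is an illustration only.

package main

import (
	"os"
	"os/exec"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
	// Restart CRI-O so the drop-in takes effect, as the log does.
	if out, err := exec.Command("sudo", "systemctl", "restart", "crio").CombinedOutput(); err != nil {
		panic(string(out))
	}
}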
	I1006 19:38:48.553703  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:38:48.554089  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:38:48.554135  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:38:48.554191  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:38:48.580220  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:48.580244  149088 cri.go:89] found id: ""
	I1006 19:38:48.580256  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:38:48.580340  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:48.584419  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:38:48.584490  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:38:48.612911  149088 cri.go:89] found id: ""
	I1006 19:38:48.612936  149088 logs.go:282] 0 containers: []
	W1006 19:38:48.612945  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:38:48.612951  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:38:48.613052  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:38:48.646669  149088 cri.go:89] found id: ""
	I1006 19:38:48.646694  149088 logs.go:282] 0 containers: []
	W1006 19:38:48.646703  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:38:48.646710  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:38:48.646766  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:38:48.672750  149088 cri.go:89] found id: ""
	I1006 19:38:48.672806  149088 logs.go:282] 0 containers: []
	W1006 19:38:48.672815  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:38:48.672822  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:38:48.672884  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:38:48.699107  149088 cri.go:89] found id: ""
	I1006 19:38:48.699137  149088 logs.go:282] 0 containers: []
	W1006 19:38:48.699146  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:38:48.699152  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:38:48.699228  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:38:48.727536  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:48.727559  149088 cri.go:89] found id: ""
	I1006 19:38:48.727567  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:38:48.727622  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:48.731259  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:38:48.731357  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:38:48.761368  149088 cri.go:89] found id: ""
	I1006 19:38:48.761402  149088 logs.go:282] 0 containers: []
	W1006 19:38:48.761411  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:38:48.761417  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:38:48.761476  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:38:48.788183  149088 cri.go:89] found id: ""
	I1006 19:38:48.788259  149088 logs.go:282] 0 containers: []
	W1006 19:38:48.788283  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:38:48.788300  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:38:48.788312  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:38:48.906900  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:38:48.906935  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:38:48.921944  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:38:48.921973  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:38:48.993102  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:38:48.993133  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:38:48.993146  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:49.025513  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:38:49.025548  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:49.051634  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:38:49.051664  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:38:49.095558  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:38:49.095591  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:38:51.626832  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:38:51.627204  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:38:51.627242  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:38:51.627294  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:38:51.674791  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:51.674809  149088 cri.go:89] found id: ""
	I1006 19:38:51.674817  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:38:51.674878  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:51.679072  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:38:51.679137  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:38:51.711667  149088 cri.go:89] found id: ""
	I1006 19:38:51.711687  149088 logs.go:282] 0 containers: []
	W1006 19:38:51.711731  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:38:51.711739  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:38:51.711793  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:38:51.742183  149088 cri.go:89] found id: ""
	I1006 19:38:51.742204  149088 logs.go:282] 0 containers: []
	W1006 19:38:51.742213  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:38:51.742219  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:38:51.742274  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:38:51.775009  149088 cri.go:89] found id: ""
	I1006 19:38:51.775030  149088 logs.go:282] 0 containers: []
	W1006 19:38:51.775039  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:38:51.775044  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:38:51.775121  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:38:51.813652  149088 cri.go:89] found id: ""
	I1006 19:38:51.813676  149088 logs.go:282] 0 containers: []
	W1006 19:38:51.813684  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:38:51.813691  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:38:51.813755  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:38:51.847340  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:51.847359  149088 cri.go:89] found id: ""
	I1006 19:38:51.847366  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:38:51.847421  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:51.851828  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:38:51.851898  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:38:51.884822  149088 cri.go:89] found id: ""
	I1006 19:38:51.884844  149088 logs.go:282] 0 containers: []
	W1006 19:38:51.884853  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:38:51.884859  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:38:51.885005  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:38:51.929191  149088 cri.go:89] found id: ""
	I1006 19:38:51.929215  149088 logs.go:282] 0 containers: []
	W1006 19:38:51.929223  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:38:51.929244  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:38:51.929273  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:38:52.069115  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:38:52.069163  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:38:52.085485  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:38:52.085518  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:38:52.197336  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:38:52.197358  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:38:52.197371  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:52.255472  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:38:52.255513  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:52.316170  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:38:52.316208  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:38:52.380847  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:38:52.380898  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
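	[editor's sketch] The second profile in this capture (PID 149088) keeps probing https://192.168.76.2:8443/healthz and gets "connection refused" while it gathers logs. A hedged Go sketch of that probe loop is shown below; the endpoint is taken from the log, while the timeout, poll interval and skipped TLS verification are assumptions for illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		// Until the apiserver comes back, this is the "connection refused" seen above.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for /healthz")
}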
	I1006 19:38:51.934497  164904 cli_runner.go:164] Run: docker network inspect pause-719933 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:38:51.974693  164904 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1006 19:38:51.978878  164904 kubeadm.go:883] updating cluster {Name:pause-719933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-719933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:38:51.979071  164904 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:38:51.979125  164904 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:38:52.028416  164904 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:38:52.028438  164904 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:38:52.028501  164904 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:38:52.059388  164904 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:38:52.059408  164904 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:38:52.059417  164904 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1006 19:38:52.059554  164904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-719933 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-719933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:38:52.059633  164904 ssh_runner.go:195] Run: crio config
	I1006 19:38:52.143993  164904 cni.go:84] Creating CNI manager for ""
	I1006 19:38:52.144073  164904 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:38:52.144109  164904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:38:52.144167  164904 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-719933 NodeName:pause-719933 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:38:52.144338  164904 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-719933"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:38:52.144476  164904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:38:52.154427  164904 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:38:52.154540  164904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:38:52.163425  164904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1006 19:38:52.179207  164904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:38:52.196539  164904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
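	[editor's sketch] The kubeadm config shown above is rendered from the option struct (kubeadm.go:195) and copied to /var/tmp/minikube/kubeadm.yaml.new. Below is a minimal text/template sketch of how a fragment like the InitConfiguration endpoint can be produced from such options; the struct fields and the template body are illustrative and are not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// initOpts holds just the values used in this fragment; field names are illustrative.
type initOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values copied from the generated config above.
	opts := initOpts{
		AdvertiseAddress: "192.168.85.2",
		APIServerPort:    8443,
		NodeName:         "pause-719933",
		CRISocket:        "/var/run/crio/crio.sock",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}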
	I1006 19:38:52.214653  164904 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:38:52.219556  164904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:38:52.410566  164904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:38:52.425665  164904 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933 for IP: 192.168.85.2
	I1006 19:38:52.425738  164904 certs.go:195] generating shared ca certs ...
	I1006 19:38:52.425768  164904 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:38:52.425943  164904 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:38:52.426018  164904 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:38:52.426053  164904 certs.go:257] generating profile certs ...
	I1006 19:38:52.426171  164904 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/client.key
	I1006 19:38:52.426275  164904 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/apiserver.key.cfa34ef1
	I1006 19:38:52.426343  164904 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/proxy-client.key
	I1006 19:38:52.426480  164904 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:38:52.426534  164904 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:38:52.426557  164904 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:38:52.426613  164904 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:38:52.426674  164904 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:38:52.426729  164904 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:38:52.426798  164904 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:38:52.427488  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:38:52.451675  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:38:52.471905  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:38:52.492044  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:38:52.510525  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1006 19:38:52.528762  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 19:38:52.547559  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:38:52.566221  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 19:38:52.585912  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:38:52.604145  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:38:52.621401  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:38:52.639058  164904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:38:52.653536  164904 ssh_runner.go:195] Run: openssl version
	I1006 19:38:52.660018  164904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:38:52.668556  164904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:38:52.672317  164904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:38:52.672402  164904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:38:52.713259  164904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:38:52.721146  164904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:38:52.729226  164904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:38:52.732808  164904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:38:52.732921  164904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:38:52.773909  164904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:38:52.781963  164904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:38:52.790340  164904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:38:52.794292  164904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:38:52.794365  164904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:38:52.837543  164904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
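	[editor's sketch] The openssl/ln sequence above hashes each CA PEM and links it into /etc/ssl/certs/<hash>.0 so OpenSSL-style lookups can find it (e.g. b5213941.0 for minikubeCA.pem). A small Go sketch of the same wiring follows; the paths mirror the log, while running as root and the error handling are assumptions.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same hash openssl computes for the trust-store symlink name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching the log above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", pemPath, "->", link)
}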
	I1006 19:38:52.845567  164904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:38:52.849446  164904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 19:38:52.891121  164904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 19:38:52.931977  164904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 19:38:52.973173  164904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 19:38:53.015648  164904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 19:38:53.060348  164904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1006 19:38:53.101764  164904 kubeadm.go:400] StartCluster: {Name:pause-719933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-719933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:38:53.101897  164904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:38:53.101967  164904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:38:53.131946  164904 cri.go:89] found id: "6ccbb47882248567001f98caa3ed6d08c96bd67cf7e41b581b8e20288b7ba653"
	I1006 19:38:53.131969  164904 cri.go:89] found id: "611f7ec82bc68048be0ca0d31606b1d278412943b79fc8a0c4c33da5805164c5"
	I1006 19:38:53.131974  164904 cri.go:89] found id: "4d6e88c27772c35c97d8c35c87ce943957683a84df865f52d99dc5d63f21d8a0"
	I1006 19:38:53.131978  164904 cri.go:89] found id: "70f2389a27f2cceb23c12fdf02125896dba5d458c676409a98715563aad756f6"
	I1006 19:38:53.131981  164904 cri.go:89] found id: "098520989a4ccf81cd5f027022cd81c910e3d341bf7918c27926e6a914e07feb"
	I1006 19:38:53.131985  164904 cri.go:89] found id: "c7989b7481b364d561193295584c5dca811b622af9cf4e24341e42e50a91f5fc"
	I1006 19:38:53.131989  164904 cri.go:89] found id: "dbf88fd449582624356375d9d2477b46834de96fb325ba01fe1444314a62865e"
	I1006 19:38:53.131992  164904 cri.go:89] found id: ""
	I1006 19:38:53.132044  164904 ssh_runner.go:195] Run: sudo runc list -f json
	W1006 19:38:53.142937  164904 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:38:53Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:38:53.143036  164904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:38:53.150964  164904 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 19:38:53.150990  164904 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 19:38:53.151042  164904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 19:38:53.158365  164904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 19:38:53.159045  164904 kubeconfig.go:125] found "pause-719933" server: "https://192.168.85.2:8443"
	I1006 19:38:53.159931  164904 kapi.go:59] client config for pause-719933: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/client.key", CAFile:"/home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 19:38:53.160418  164904 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 19:38:53.160438  164904 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 19:38:53.160444  164904 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 19:38:53.160449  164904 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 19:38:53.160454  164904 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 19:38:53.160728  164904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 19:38:53.169080  164904 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1006 19:38:53.169154  164904 kubeadm.go:601] duration metric: took 18.157283ms to restartPrimaryControlPlane
	I1006 19:38:53.169173  164904 kubeadm.go:402] duration metric: took 67.4146ms to StartCluster
	I1006 19:38:53.169192  164904 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:38:53.169250  164904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:38:53.170140  164904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:38:53.170371  164904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:38:53.170708  164904 config.go:182] Loaded profile config "pause-719933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:38:53.170759  164904 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:38:53.175905  164904 out.go:179] * Verifying Kubernetes components...
	I1006 19:38:53.175905  164904 out.go:179] * Enabled addons: 
	I1006 19:38:53.178704  164904 addons.go:514] duration metric: took 7.927778ms for enable addons: enabled=[]
	I1006 19:38:53.178743  164904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:38:53.314020  164904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:38:53.328642  164904 node_ready.go:35] waiting up to 6m0s for node "pause-719933" to be "Ready" ...
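	[editor's sketch] At this point the restarted profile waits up to 6m0s for node "pause-719933" to report Ready. A hedged sketch of such a readiness wait using client-go is shown below; the kubeconfig path and node name come from the log, while the polling interval and the use of client-go directly (rather than minikube's node_ready helper) are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21701-2540/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "pause-719933", metav1.GetOptions{})
		if err == nil {
			// Ready means the NodeReady condition is True.
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node to become Ready")
}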
	I1006 19:38:54.945936  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:38:54.946326  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:38:54.946365  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:38:54.946419  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:38:54.994808  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:54.994827  149088 cri.go:89] found id: ""
	I1006 19:38:54.994835  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:38:54.994898  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:55.004013  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:38:55.004087  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:38:55.074405  149088 cri.go:89] found id: ""
	I1006 19:38:55.074427  149088 logs.go:282] 0 containers: []
	W1006 19:38:55.074435  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:38:55.074442  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:38:55.074513  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:38:55.139028  149088 cri.go:89] found id: ""
	I1006 19:38:55.139057  149088 logs.go:282] 0 containers: []
	W1006 19:38:55.139066  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:38:55.139072  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:38:55.139137  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:38:55.182741  149088 cri.go:89] found id: ""
	I1006 19:38:55.182762  149088 logs.go:282] 0 containers: []
	W1006 19:38:55.182771  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:38:55.182777  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:38:55.182844  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:38:55.242097  149088 cri.go:89] found id: ""
	I1006 19:38:55.242121  149088 logs.go:282] 0 containers: []
	W1006 19:38:55.242130  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:38:55.242136  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:38:55.242195  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:38:55.287213  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:55.287233  149088 cri.go:89] found id: ""
	I1006 19:38:55.287240  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:38:55.287308  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:55.296382  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:38:55.296520  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:38:55.346165  149088 cri.go:89] found id: ""
	I1006 19:38:55.346231  149088 logs.go:282] 0 containers: []
	W1006 19:38:55.346253  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:38:55.346270  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:38:55.346364  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:38:55.398097  149088 cri.go:89] found id: ""
	I1006 19:38:55.398178  149088 logs.go:282] 0 containers: []
	W1006 19:38:55.398200  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:38:55.398238  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:38:55.398267  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:38:55.456319  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:38:55.456399  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:38:55.641098  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:38:55.641133  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:38:55.667783  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:38:55.667868  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:38:55.760859  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:38:55.760884  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:38:55.760902  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:55.795615  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:38:55.795651  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:55.846652  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:38:55.846682  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:38:58.407328  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:38:58.407820  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:38:58.407867  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:38:58.407924  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:38:58.460197  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:58.460223  149088 cri.go:89] found id: ""
	I1006 19:38:58.460232  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:38:58.460295  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:58.468109  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:38:58.468189  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:38:58.523028  149088 cri.go:89] found id: ""
	I1006 19:38:58.523056  149088 logs.go:282] 0 containers: []
	W1006 19:38:58.523065  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:38:58.523072  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:38:58.523143  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:38:58.570867  149088 cri.go:89] found id: ""
	I1006 19:38:58.570894  149088 logs.go:282] 0 containers: []
	W1006 19:38:58.570903  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:38:58.570910  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:38:58.570969  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:38:58.637154  149088 cri.go:89] found id: ""
	I1006 19:38:58.637175  149088 logs.go:282] 0 containers: []
	W1006 19:38:58.637184  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:38:58.637190  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:38:58.637251  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:38:58.677932  149088 cri.go:89] found id: ""
	I1006 19:38:58.677953  149088 logs.go:282] 0 containers: []
	W1006 19:38:58.677961  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:38:58.677968  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:38:58.678029  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:38:58.722029  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:58.722049  149088 cri.go:89] found id: ""
	I1006 19:38:58.722057  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:38:58.722116  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:58.726350  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:38:58.726422  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:38:58.763232  149088 cri.go:89] found id: ""
	I1006 19:38:58.763253  149088 logs.go:282] 0 containers: []
	W1006 19:38:58.763261  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:38:58.763267  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:38:58.763327  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:38:58.795852  149088 cri.go:89] found id: ""
	I1006 19:38:58.795929  149088 logs.go:282] 0 containers: []
	W1006 19:38:58.795950  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:38:58.795974  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:38:58.796012  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:58.845457  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:38:58.845527  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:58.903928  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:38:58.903962  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:38:58.968567  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:38:58.968608  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:38:59.039136  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:38:59.039167  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:38:59.172379  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:38:59.172419  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:38:59.195951  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:38:59.195982  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:38:59.284957  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:39:01.786402  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:39:01.786820  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:39:01.786866  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:39:01.786922  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:39:01.823475  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:01.823496  149088 cri.go:89] found id: ""
	I1006 19:39:01.823504  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:39:01.823563  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:01.827366  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:39:01.827450  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:39:01.853207  149088 cri.go:89] found id: ""
	I1006 19:39:01.853230  149088 logs.go:282] 0 containers: []
	W1006 19:39:01.853239  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:39:01.853245  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:39:01.853304  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:39:01.879881  149088 cri.go:89] found id: ""
	I1006 19:39:01.879911  149088 logs.go:282] 0 containers: []
	W1006 19:39:01.879920  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:39:01.879927  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:39:01.879988  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:39:01.914447  149088 cri.go:89] found id: ""
	I1006 19:39:01.914479  149088 logs.go:282] 0 containers: []
	W1006 19:39:01.914488  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:39:01.914494  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:39:01.914552  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:39:01.942254  149088 cri.go:89] found id: ""
	I1006 19:39:01.942275  149088 logs.go:282] 0 containers: []
	W1006 19:39:01.942284  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:39:01.942291  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:39:01.942350  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:39:01.971016  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:01.971034  149088 cri.go:89] found id: ""
	I1006 19:39:01.971042  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:39:01.971096  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:01.974882  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:39:01.974951  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:39:02.002724  149088 cri.go:89] found id: ""
	I1006 19:39:02.002747  149088 logs.go:282] 0 containers: []
	W1006 19:39:02.002756  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:39:02.002763  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:39:02.002823  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:39:02.033817  149088 cri.go:89] found id: ""
	I1006 19:39:02.033847  149088 logs.go:282] 0 containers: []
	W1006 19:39:02.033856  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:39:02.033866  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:39:02.033896  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:39:02.079364  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:39:02.079446  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:39:02.210537  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:39:02.210574  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:39:02.225619  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:39:02.225650  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:39:02.297678  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:39:02.297697  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:39:02.297710  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:02.334962  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:39:02.334990  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:02.369633  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:39:02.369662  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:38:59.467596  164904 node_ready.go:49] node "pause-719933" is "Ready"
	I1006 19:38:59.467622  164904 node_ready.go:38] duration metric: took 6.138950073s for node "pause-719933" to be "Ready" ...
	I1006 19:38:59.467634  164904 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:38:59.467693  164904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:38:59.483641  164904 api_server.go:72] duration metric: took 6.313232444s to wait for apiserver process to appear ...
	I1006 19:38:59.483662  164904 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:38:59.483681  164904 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:38:59.522135  164904 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 19:38:59.522237  164904 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 19:38:59.983776  164904 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:38:59.996688  164904 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 19:38:59.996784  164904 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 19:39:00.484547  164904 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:39:00.496183  164904 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 19:39:00.496217  164904 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 19:39:00.983836  164904 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:39:00.992118  164904 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1006 19:39:00.993344  164904 api_server.go:141] control plane version: v1.34.1
	I1006 19:39:00.993373  164904 api_server.go:131] duration metric: took 1.509703027s to wait for apiserver health ...
	I1006 19:39:00.993398  164904 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:39:00.997073  164904 system_pods.go:59] 7 kube-system pods found
	I1006 19:39:00.997112  164904 system_pods.go:61] "coredns-66bc5c9577-b49dq" [cd7b9c84-825b-4c88-9282-6ab75d1df072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:39:00.997121  164904 system_pods.go:61] "etcd-pause-719933" [5c5d8626-929a-4e13-804f-5f7c96f99727] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:39:00.997127  164904 system_pods.go:61] "kindnet-g6m52" [af518d42-83f8-4dc8-95ad-ae6659a36a4b] Running
	I1006 19:39:00.997134  164904 system_pods.go:61] "kube-apiserver-pause-719933" [3018ee16-ba2d-4552-9833-63982cb79f6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:39:00.997143  164904 system_pods.go:61] "kube-controller-manager-pause-719933" [98dfb5e3-bd9a-45f2-a211-b83d28d1759f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:39:00.997153  164904 system_pods.go:61] "kube-proxy-jq5mn" [e0bdba86-2eef-494a-a380-06b1e0a60cdf] Running
	I1006 19:39:00.997159  164904 system_pods.go:61] "kube-scheduler-pause-719933" [2225abd6-ba17-40ca-be55-5176b9cfe17a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:39:00.997177  164904 system_pods.go:74] duration metric: took 3.764082ms to wait for pod list to return data ...
	I1006 19:39:00.997186  164904 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:39:00.999869  164904 default_sa.go:45] found service account: "default"
	I1006 19:39:00.999898  164904 default_sa.go:55] duration metric: took 2.703369ms for default service account to be created ...
	I1006 19:39:00.999908  164904 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 19:39:01.002691  164904 system_pods.go:86] 7 kube-system pods found
	I1006 19:39:01.002723  164904 system_pods.go:89] "coredns-66bc5c9577-b49dq" [cd7b9c84-825b-4c88-9282-6ab75d1df072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:39:01.002757  164904 system_pods.go:89] "etcd-pause-719933" [5c5d8626-929a-4e13-804f-5f7c96f99727] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:39:01.002773  164904 system_pods.go:89] "kindnet-g6m52" [af518d42-83f8-4dc8-95ad-ae6659a36a4b] Running
	I1006 19:39:01.002780  164904 system_pods.go:89] "kube-apiserver-pause-719933" [3018ee16-ba2d-4552-9833-63982cb79f6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:39:01.002787  164904 system_pods.go:89] "kube-controller-manager-pause-719933" [98dfb5e3-bd9a-45f2-a211-b83d28d1759f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:39:01.002795  164904 system_pods.go:89] "kube-proxy-jq5mn" [e0bdba86-2eef-494a-a380-06b1e0a60cdf] Running
	I1006 19:39:01.002803  164904 system_pods.go:89] "kube-scheduler-pause-719933" [2225abd6-ba17-40ca-be55-5176b9cfe17a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:39:01.002816  164904 system_pods.go:126] duration metric: took 2.902245ms to wait for k8s-apps to be running ...
	I1006 19:39:01.002846  164904 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 19:39:01.002916  164904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:39:01.029488  164904 system_svc.go:56] duration metric: took 26.633148ms WaitForService to wait for kubelet
	I1006 19:39:01.029512  164904 kubeadm.go:586] duration metric: took 7.859108918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:39:01.029532  164904 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:39:01.032300  164904 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:39:01.032337  164904 node_conditions.go:123] node cpu capacity is 2
	I1006 19:39:01.032351  164904 node_conditions.go:105] duration metric: took 2.812194ms to run NodePressure ...
	I1006 19:39:01.032364  164904 start.go:241] waiting for startup goroutines ...
	I1006 19:39:01.032372  164904 start.go:246] waiting for cluster config update ...
	I1006 19:39:01.032380  164904 start.go:255] writing updated cluster config ...
	I1006 19:39:01.032703  164904 ssh_runner.go:195] Run: rm -f paused
	I1006 19:39:01.036251  164904 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:39:01.037016  164904 kapi.go:59] client config for pause-719933: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/client.key", CAFile:"/home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 19:39:01.039941  164904 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b49dq" in "kube-system" namespace to be "Ready" or be gone ...
	W1006 19:39:03.045579  164904 pod_ready.go:104] pod "coredns-66bc5c9577-b49dq" is not "Ready", error: <nil>
	I1006 19:39:04.920578  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:39:04.921008  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:39:04.921068  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:39:04.921128  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:39:04.952306  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:04.952331  149088 cri.go:89] found id: ""
	I1006 19:39:04.952350  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:39:04.952413  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:04.956155  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:39:04.956231  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:39:04.983490  149088 cri.go:89] found id: ""
	I1006 19:39:04.983514  149088 logs.go:282] 0 containers: []
	W1006 19:39:04.983522  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:39:04.983529  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:39:04.983590  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:39:05.010329  149088 cri.go:89] found id: ""
	I1006 19:39:05.010352  149088 logs.go:282] 0 containers: []
	W1006 19:39:05.010360  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:39:05.010370  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:39:05.010428  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:39:05.041247  149088 cri.go:89] found id: ""
	I1006 19:39:05.041274  149088 logs.go:282] 0 containers: []
	W1006 19:39:05.041283  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:39:05.041289  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:39:05.041350  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:39:05.071162  149088 cri.go:89] found id: ""
	I1006 19:39:05.071188  149088 logs.go:282] 0 containers: []
	W1006 19:39:05.071197  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:39:05.071203  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:39:05.071262  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:39:05.098171  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:05.098202  149088 cri.go:89] found id: ""
	I1006 19:39:05.098213  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:39:05.098272  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:05.102425  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:39:05.102504  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:39:05.130211  149088 cri.go:89] found id: ""
	I1006 19:39:05.130289  149088 logs.go:282] 0 containers: []
	W1006 19:39:05.130312  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:39:05.130330  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:39:05.130418  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:39:05.157818  149088 cri.go:89] found id: ""
	I1006 19:39:05.157842  149088 logs.go:282] 0 containers: []
	W1006 19:39:05.157850  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:39:05.157858  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:39:05.157877  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:05.189016  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:39:05.189073  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:05.217093  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:39:05.217116  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:39:05.260125  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:39:05.260157  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:39:05.292615  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:39:05.292642  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:39:05.421925  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:39:05.421962  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:39:05.437176  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:39:05.437208  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:39:05.511755  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1006 19:39:05.047746  164904 pod_ready.go:104] pod "coredns-66bc5c9577-b49dq" is not "Ready", error: <nil>
	I1006 19:39:06.045913  164904 pod_ready.go:94] pod "coredns-66bc5c9577-b49dq" is "Ready"
	I1006 19:39:06.045947  164904 pod_ready.go:86] duration metric: took 5.005978s for pod "coredns-66bc5c9577-b49dq" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:06.049561  164904 pod_ready.go:83] waiting for pod "etcd-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:07.056430  164904 pod_ready.go:94] pod "etcd-pause-719933" is "Ready"
	I1006 19:39:07.056460  164904 pod_ready.go:86] duration metric: took 1.006867607s for pod "etcd-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:07.059170  164904 pod_ready.go:83] waiting for pod "kube-apiserver-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:07.564637  164904 pod_ready.go:94] pod "kube-apiserver-pause-719933" is "Ready"
	I1006 19:39:07.564667  164904 pod_ready.go:86] duration metric: took 505.473585ms for pod "kube-apiserver-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:07.567229  164904 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:07.572475  164904 pod_ready.go:94] pod "kube-controller-manager-pause-719933" is "Ready"
	I1006 19:39:07.572503  164904 pod_ready.go:86] duration metric: took 5.246935ms for pod "kube-controller-manager-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:07.643384  164904 pod_ready.go:83] waiting for pod "kube-proxy-jq5mn" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:08.044119  164904 pod_ready.go:94] pod "kube-proxy-jq5mn" is "Ready"
	I1006 19:39:08.044149  164904 pod_ready.go:86] duration metric: took 400.68726ms for pod "kube-proxy-jq5mn" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:08.243530  164904 pod_ready.go:83] waiting for pod "kube-scheduler-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	W1006 19:39:10.253158  164904 pod_ready.go:104] pod "kube-scheduler-pause-719933" is not "Ready", error: <nil>
	I1006 19:39:10.748546  164904 pod_ready.go:94] pod "kube-scheduler-pause-719933" is "Ready"
	I1006 19:39:10.748575  164904 pod_ready.go:86] duration metric: took 2.505009264s for pod "kube-scheduler-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:10.748587  164904 pod_ready.go:40] duration metric: took 9.712304394s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:39:10.801622  164904 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 19:39:10.804830  164904 out.go:179] * Done! kubectl is now configured to use "pause-719933" cluster and "default" namespace by default
	I1006 19:39:08.012630  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:39:08.013169  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:39:08.013249  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:39:08.013339  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:39:08.049149  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:08.049179  149088 cri.go:89] found id: ""
	I1006 19:39:08.049187  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:39:08.049287  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:08.053089  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:39:08.053205  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:39:08.085392  149088 cri.go:89] found id: ""
	I1006 19:39:08.085416  149088 logs.go:282] 0 containers: []
	W1006 19:39:08.085425  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:39:08.085431  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:39:08.085506  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:39:08.115010  149088 cri.go:89] found id: ""
	I1006 19:39:08.115035  149088 logs.go:282] 0 containers: []
	W1006 19:39:08.115067  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:39:08.115102  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:39:08.115190  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:39:08.143209  149088 cri.go:89] found id: ""
	I1006 19:39:08.143235  149088 logs.go:282] 0 containers: []
	W1006 19:39:08.143244  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:39:08.143250  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:39:08.143336  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:39:08.173467  149088 cri.go:89] found id: ""
	I1006 19:39:08.173490  149088 logs.go:282] 0 containers: []
	W1006 19:39:08.173499  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:39:08.173505  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:39:08.173583  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:39:08.200847  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:08.200869  149088 cri.go:89] found id: ""
	I1006 19:39:08.200877  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:39:08.200933  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:08.204560  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:39:08.204651  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:39:08.231899  149088 cri.go:89] found id: ""
	I1006 19:39:08.231961  149088 logs.go:282] 0 containers: []
	W1006 19:39:08.231977  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:39:08.231984  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:39:08.232041  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:39:08.260519  149088 cri.go:89] found id: ""
	I1006 19:39:08.260545  149088 logs.go:282] 0 containers: []
	W1006 19:39:08.260553  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:39:08.260562  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:39:08.260592  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:39:08.386024  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:39:08.386058  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:39:08.401867  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:39:08.401893  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:39:08.472165  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:39:08.472225  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:39:08.472265  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:08.503456  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:39:08.503483  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:08.529010  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:39:08.529036  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:39:08.573670  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:39:08.573706  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:39:11.107215  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:39:11.107666  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:39:11.107741  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:39:11.107810  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:39:11.145197  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:11.145217  149088 cri.go:89] found id: ""
	I1006 19:39:11.145225  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:39:11.145282  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:11.152129  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:39:11.152204  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:39:11.210048  149088 cri.go:89] found id: ""
	I1006 19:39:11.210069  149088 logs.go:282] 0 containers: []
	W1006 19:39:11.210077  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:39:11.210084  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:39:11.210143  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:39:11.254430  149088 cri.go:89] found id: ""
	I1006 19:39:11.254511  149088 logs.go:282] 0 containers: []
	W1006 19:39:11.254535  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:39:11.254556  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:39:11.254651  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:39:11.287344  149088 cri.go:89] found id: ""
	I1006 19:39:11.287365  149088 logs.go:282] 0 containers: []
	W1006 19:39:11.287373  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:39:11.287380  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:39:11.287441  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:39:11.324194  149088 cri.go:89] found id: ""
	I1006 19:39:11.324216  149088 logs.go:282] 0 containers: []
	W1006 19:39:11.324224  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:39:11.324230  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:39:11.324288  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:39:11.356109  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:11.356133  149088 cri.go:89] found id: ""
	I1006 19:39:11.356145  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:39:11.356214  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:11.359980  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:39:11.360048  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:39:11.384975  149088 cri.go:89] found id: ""
	I1006 19:39:11.384996  149088 logs.go:282] 0 containers: []
	W1006 19:39:11.385005  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:39:11.385011  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:39:11.385073  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:39:11.418804  149088 cri.go:89] found id: ""
	I1006 19:39:11.418828  149088 logs.go:282] 0 containers: []
	W1006 19:39:11.418837  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:39:11.418847  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:39:11.418892  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:39:11.554979  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:39:11.555018  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:39:11.570301  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:39:11.570331  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:39:11.637473  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:39:11.637494  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:39:11.637514  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:11.675231  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:39:11.675263  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:11.706873  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:39:11.706909  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:39:11.750738  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:39:11.750771  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.759410966Z" level=info msg="Created container 893d9c10f4bd96fc39fc11cf22e3f42bed5f93fc5a7fc287a9290b2e3ff84dfe: kube-system/coredns-66bc5c9577-b49dq/coredns" id=500acf4b-9bb9-44a8-87f8-8486ba3b83cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.76021247Z" level=info msg="Starting container: 893d9c10f4bd96fc39fc11cf22e3f42bed5f93fc5a7fc287a9290b2e3ff84dfe" id=c08cc043-3cf0-45cd-9f34-23636d3a8add name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.76104659Z" level=info msg="Started container" PID=2348 containerID=8e8330f950aa7b3752a5fe13d772a7e3fdbbadc6d01761c6972079bdf20a9513 description=kube-system/kube-scheduler-pause-719933/kube-scheduler id=d1721894-0c45-4656-aeba-9b6207e88d57 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2193f19dd2130ebb6566276a871d2ec65403ea1d5ff68e46aac392f442bf2587
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.762509298Z" level=info msg="Started container" PID=2367 containerID=893d9c10f4bd96fc39fc11cf22e3f42bed5f93fc5a7fc287a9290b2e3ff84dfe description=kube-system/coredns-66bc5c9577-b49dq/coredns id=c08cc043-3cf0-45cd-9f34-23636d3a8add name=/runtime.v1.RuntimeService/StartContainer sandboxID=ff4b890809a10075e9d9b27e7f8d66e7bc70859af64e22eede9ff7b21ceeb2f0
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.766018289Z" level=info msg="Created container 961b41de3573fdda3b8ea4db9c54d7e2815089c28a58832b9720d3c4a66d4d91: kube-system/kube-controller-manager-pause-719933/kube-controller-manager" id=36b90307-27f2-47ff-9418-04c904f6b089 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.770023996Z" level=info msg="Starting container: 961b41de3573fdda3b8ea4db9c54d7e2815089c28a58832b9720d3c4a66d4d91" id=0db926ca-0b03-453f-b1c0-170ea6f7518a name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.771424124Z" level=info msg="Created container fbfdf2dfedc0e45c2f1e89a8c3423338cddc71d3952ca96617352bba0eeaeb18: kube-system/kube-apiserver-pause-719933/kube-apiserver" id=eb2f6582-ac72-4e65-bfab-d466e1115e15 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.772378769Z" level=info msg="Starting container: fbfdf2dfedc0e45c2f1e89a8c3423338cddc71d3952ca96617352bba0eeaeb18" id=b3dc9ae8-aa4d-44cc-aca4-a7a622ae4a17 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.773589595Z" level=info msg="Started container" PID=2357 containerID=961b41de3573fdda3b8ea4db9c54d7e2815089c28a58832b9720d3c4a66d4d91 description=kube-system/kube-controller-manager-pause-719933/kube-controller-manager id=0db926ca-0b03-453f-b1c0-170ea6f7518a name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c153a4d1fe87b6e19a595ca209e8194d3824439f8b0d712f2ceb2bd05357e3c
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.780482654Z" level=info msg="Started container" PID=2362 containerID=fbfdf2dfedc0e45c2f1e89a8c3423338cddc71d3952ca96617352bba0eeaeb18 description=kube-system/kube-apiserver-pause-719933/kube-apiserver id=b3dc9ae8-aa4d-44cc-aca4-a7a622ae4a17 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b416248c2e2b247c730efff9c94d2e1071c117163e375572506ae8651db1b97b
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.946469756Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.949944541Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.949983384Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.950005743Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.953345693Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.953381033Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.953406009Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.956602714Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.956635149Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.956656613Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.959569706Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.959602076Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.959626199Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.962564654Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.962595382Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	fbfdf2dfedc0e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago       Running             kube-apiserver            1                   b416248c2e2b2       kube-apiserver-pause-719933            kube-system
	893d9c10f4bd9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   20 seconds ago       Running             coredns                   1                   ff4b890809a10       coredns-66bc5c9577-b49dq               kube-system
	961b41de3573f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   20 seconds ago       Running             kube-controller-manager   1                   7c153a4d1fe87       kube-controller-manager-pause-719933   kube-system
	8e8330f950aa7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago       Running             kube-scheduler            1                   2193f19dd2130       kube-scheduler-pause-719933            kube-system
	aa224c01881ad       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   20 seconds ago       Running             kube-proxy                1                   6f8b6d3f47078       kube-proxy-jq5mn                       kube-system
	bd349138ed2da       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   20 seconds ago       Running             kindnet-cni               1                   01436df0d4ba0       kindnet-g6m52                          kube-system
	b4eb3f8f3e81f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   20 seconds ago       Running             etcd                      1                   5056a33af39dd       etcd-pause-719933                      kube-system
	6ccbb47882248       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   32 seconds ago       Exited              coredns                   0                   ff4b890809a10       coredns-66bc5c9577-b49dq               kube-system
	611f7ec82bc68       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   6f8b6d3f47078       kube-proxy-jq5mn                       kube-system
	4d6e88c27772c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   01436df0d4ba0       kindnet-g6m52                          kube-system
	70f2389a27f2c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   b416248c2e2b2       kube-apiserver-pause-719933            kube-system
	098520989a4cc       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   2193f19dd2130       kube-scheduler-pause-719933            kube-system
	c7989b7481b36       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   7c153a4d1fe87       kube-controller-manager-pause-719933   kube-system
	dbf88fd449582       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   5056a33af39dd       etcd-pause-719933                      kube-system
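
	For reference, a container status table like the one above can normally be regenerated on the node itself with CRI-O's crictl. The command below is only a sketch, not part of the captured report, and assumes crictl is installed and CRI-O is listening on its default socket path:

	# hypothetical reproduction command, not taken from the report above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a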
	
	
	==> coredns [6ccbb47882248567001f98caa3ed6d08c96bd67cf7e41b581b8e20288b7ba653] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49335 - 22388 "HINFO IN 7626729546313852129.7134203665847368022. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021044208s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [893d9c10f4bd96fc39fc11cf22e3f42bed5f93fc5a7fc287a9290b2e3ff84dfe] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59917 - 64056 "HINFO IN 307417467803846690.3552725584248081553. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025161058s
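
	The connection-refused messages at the start of this restarted coredns instance line up with the kube-apiserver restart captured earlier: the new apiserver container was started at 19:38:53 and reports its caches synced at 19:38:59, after which CoreDNS serves normally. To compare this instance with the exited attempt listed in the container status table, the previous container log can be fetched with kubectl; this is a sketch and assumes the kubeconfig context for the pause-719933 profile is active:

	# hypothetical follow-up command, not part of the captured report
	kubectl -n kube-system logs coredns-66bc5c9577-b49dq --previous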
	
	
	==> describe nodes <==
	Name:               pause-719933
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-719933
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=pause-719933
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_37_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:37:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-719933
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:39:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:38:41 +0000   Mon, 06 Oct 2025 19:37:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:38:41 +0000   Mon, 06 Oct 2025 19:37:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:38:41 +0000   Mon, 06 Oct 2025 19:37:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:38:41 +0000   Mon, 06 Oct 2025 19:38:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-719933
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7935b53324c44557941dec33252d44c4
	  System UUID:                760c6b82-3667-4718-92b4-cf492912ca13
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-b49dq                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     74s
	  kube-system                 etcd-pause-719933                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         79s
	  kube-system                 kindnet-g6m52                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      74s
	  kube-system                 kube-apiserver-pause-719933             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-719933    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-jq5mn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-pause-719933             100m (5%)     0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 73s   kube-proxy       
	  Normal   Starting                 14s   kube-proxy       
	  Normal   Starting                 79s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 79s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  79s   kubelet          Node pause-719933 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s   kubelet          Node pause-719933 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s   kubelet          Node pause-719933 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           75s   node-controller  Node pause-719933 event: Registered Node pause-719933 in Controller
	  Normal   NodeReady                32s   kubelet          Node pause-719933 status is now: NodeReady
	  Normal   RegisteredNode           11s   node-controller  Node pause-719933 event: Registered Node pause-719933 in Controller
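
	The node description above is standard kubectl describe node output as gathered by the minikube log bundle. It can be reproduced directly against this cluster with the command below; this is a sketch and assumes the profile's kubeconfig context is selected:

	# hypothetical reproduction command, not part of the captured report
	kubectl describe node pause-719933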
	
	
	==> dmesg <==
	[Oct 6 19:11] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:12] overlayfs: idmapped layers are currently not supported
	[  +3.608985] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:13] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:14] overlayfs: idmapped layers are currently not supported
	[ +11.752506] hrtimer: interrupt took 8273017 ns
	[Oct 6 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:21] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:23] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
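
	The repeated "overlayfs: idmapped layers are currently not supported" entries are kernel messages from the shared CI host, typically logged whenever a new overlay mount is created on a kernel without idmapped-layer support; they are informational noise rather than a failure of the pause-719933 node. A comparable human-readable kernel log can be pulled on the host with the sketch below (the exact flags minikube uses to collect this section may differ):

	# hypothetical reproduction command, not part of the captured report
	sudo dmesg --ctime | tail -n 50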
	
	
	==> etcd [b4eb3f8f3e81f8af021cd9dd5ff7fd72c58ef133553cd056ccab41427cc64ece] <==
	{"level":"warn","ts":"2025-10-06T19:38:57.478029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.496847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.518778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.547900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.567817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.592809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.602793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.620751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.639257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.655859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.673583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.690634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.709848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.726448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.755758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.764602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.779004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.800756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.815594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.833259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.858943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.889342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.907749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.920473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:58.027343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52578","server-name":"","error":"EOF"}
	
	
	==> etcd [dbf88fd449582624356375d9d2477b46834de96fb325ba01fe1444314a62865e] <==
	{"level":"warn","ts":"2025-10-06T19:37:50.555586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:37:50.568739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:37:50.589410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:37:50.615886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:37:50.630290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:37:50.651339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:37:50.744939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34588","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T19:38:44.946390Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-06T19:38:44.946444Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-719933","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-06T19:38:44.946542Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-06T19:38:44.946602Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-06T19:38:45.218962Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T19:38:45.219033Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-10-06T19:38:45.219040Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-06T19:38:45.219093Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-06T19:38:45.219103Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T19:38:45.219108Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-06T19:38:45.219124Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-06T19:38:45.219185Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-06T19:38:45.219197Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-06T19:38:45.219205Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T19:38:45.222920Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-06T19:38:45.223042Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T19:38:45.223104Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-06T19:38:45.223113Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-719933","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 19:39:14 up  1:21,  0 user,  load average: 2.05, 2.47, 2.18
	Linux pause-719933 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4d6e88c27772c35c97d8c35c87ce943957683a84df865f52d99dc5d63f21d8a0] <==
	I1006 19:38:00.545967       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:38:00.550545       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1006 19:38:00.550700       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:38:00.550712       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:38:00.550723       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:38:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:38:00.746769       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:38:00.747806       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:38:00.747891       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:38:00.748025       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1006 19:38:30.747799       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1006 19:38:30.747804       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1006 19:38:30.747904       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1006 19:38:30.747932       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1006 19:38:32.348008       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:38:32.348048       1 metrics.go:72] Registering metrics
	I1006 19:38:32.348128       1 controller.go:711] "Syncing nftables rules"
	I1006 19:38:40.753660       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1006 19:38:40.753700       1 main.go:301] handling current node
	
	
	==> kindnet [bd349138ed2daf1aa487424d2bc98d409e705c9241944a3c17768fd46f7f8289] <==
	I1006 19:38:53.751239       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:38:53.751425       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1006 19:38:53.751564       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:38:53.751580       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:38:53.751589       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:38:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:38:53.945516       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:38:53.945600       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:38:53.945639       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:38:53.956790       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1006 19:38:59.559787       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:38:59.559906       1 metrics.go:72] Registering metrics
	I1006 19:38:59.559993       1 controller.go:711] "Syncing nftables rules"
	I1006 19:39:03.946031       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1006 19:39:03.946118       1 main.go:301] handling current node
	I1006 19:39:13.946412       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1006 19:39:13.946474       1 main.go:301] handling current node
	
	
	==> kube-apiserver [70f2389a27f2cceb23c12fdf02125896dba5d458c676409a98715563aad756f6] <==
	W1006 19:38:44.984789       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984812       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984832       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984841       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984892       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984945       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984965       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984993       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984998       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985025       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985053       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985052       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985082       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985103       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985108       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985135       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985162       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985216       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985247       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985306       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985350       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985354       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985380       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985394       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.986061       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fbfdf2dfedc0e45c2f1e89a8c3423338cddc71d3952ca96617352bba0eeaeb18] <==
	I1006 19:38:59.413614       1 policy_source.go:240] refreshing policies
	I1006 19:38:59.414465       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1006 19:38:59.414563       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1006 19:38:59.414798       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1006 19:38:59.414956       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1006 19:38:59.415185       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:38:59.415229       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1006 19:38:59.415236       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1006 19:38:59.417270       1 aggregator.go:171] initial CRD sync complete...
	I1006 19:38:59.417299       1 autoregister_controller.go:144] Starting autoregister controller
	I1006 19:38:59.417306       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 19:38:59.418865       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1006 19:38:59.449427       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:38:59.451577       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1006 19:38:59.451648       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 19:38:59.459963       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1006 19:38:59.467140       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1006 19:38:59.521014       1 cache.go:39] Caches are synced for autoregister controller
	E1006 19:38:59.522051       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1006 19:38:59.907545       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:39:01.235202       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 19:39:02.668732       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 19:39:02.814236       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 19:39:02.915807       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 19:39:03.033030       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [961b41de3573fdda3b8ea4db9c54d7e2815089c28a58832b9720d3c4a66d4d91] <==
	I1006 19:39:02.643781       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:39:02.643815       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 19:39:02.639764       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1006 19:39:02.640949       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1006 19:39:02.640969       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1006 19:39:02.640987       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1006 19:39:02.644861       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1006 19:39:02.644904       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1006 19:39:02.644916       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1006 19:39:02.644927       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1006 19:39:02.640996       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1006 19:39:02.641007       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1006 19:39:02.641019       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1006 19:39:02.641335       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 19:39:02.655841       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:39:02.656550       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1006 19:39:02.656690       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1006 19:39:02.657287       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1006 19:39:02.659829       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1006 19:39:02.659977       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1006 19:39:02.660087       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-719933"
	I1006 19:39:02.660152       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1006 19:39:02.660736       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1006 19:39:02.660812       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1006 19:39:02.667835       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [c7989b7481b364d561193295584c5dca811b622af9cf4e24341e42e50a91f5fc] <==
	I1006 19:37:58.583472       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1006 19:37:58.583499       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:37:58.583514       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:37:58.583522       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 19:37:58.583802       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1006 19:37:58.585157       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1006 19:37:58.585189       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1006 19:37:58.585278       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1006 19:37:58.585379       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1006 19:37:58.586812       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 19:37:58.586891       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1006 19:37:58.592163       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1006 19:37:58.592239       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1006 19:37:58.592273       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1006 19:37:58.592279       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1006 19:37:58.592284       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1006 19:37:58.592521       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1006 19:37:58.602318       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-719933" podCIDRs=["10.244.0.0/24"]
	I1006 19:37:58.616725       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1006 19:37:58.632391       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1006 19:37:58.632525       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1006 19:37:58.632691       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-719933"
	I1006 19:37:58.632827       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1006 19:37:58.633066       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1006 19:38:43.639127       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [611f7ec82bc68048be0ca0d31606b1d278412943b79fc8a0c4c33da5805164c5] <==
	I1006 19:38:00.538540       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:38:00.656036       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:38:00.756215       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:38:00.756253       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1006 19:38:00.756345       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:38:00.859570       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:38:00.859631       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:38:00.863428       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:38:00.863904       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:38:00.863931       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:38:00.867146       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:38:00.867167       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:38:00.867448       1 config.go:200] "Starting service config controller"
	I1006 19:38:00.867462       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:38:00.867912       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:38:00.867927       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:38:00.868313       1 config.go:309] "Starting node config controller"
	I1006 19:38:00.868329       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:38:00.868336       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:38:00.967407       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1006 19:38:00.968556       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 19:38:00.968705       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [aa224c01881ad4743b57a626d9ef90f9c1ac21439aa850feca08f658dd0552d9] <==
	I1006 19:38:56.949139       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:38:57.572099       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:38:59.540364       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:38:59.540632       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1006 19:38:59.541255       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:38:59.729295       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:38:59.729355       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:38:59.734201       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:38:59.734569       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:38:59.734794       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:38:59.736176       1 config.go:200] "Starting service config controller"
	I1006 19:38:59.736264       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:38:59.736307       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:38:59.736354       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:38:59.736391       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:38:59.736424       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:38:59.737098       1 config.go:309] "Starting node config controller"
	I1006 19:38:59.737177       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:38:59.737210       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:38:59.836938       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 19:38:59.839400       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1006 19:38:59.839805       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [098520989a4ccf81cd5f027022cd81c910e3d341bf7918c27926e6a914e07feb] <==
	E1006 19:37:52.450460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1006 19:37:52.450563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1006 19:37:52.455516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1006 19:37:52.458264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1006 19:37:52.458417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1006 19:37:52.458495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1006 19:37:52.458592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1006 19:37:52.458686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1006 19:37:52.458772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1006 19:37:52.458855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1006 19:37:52.459951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1006 19:37:52.460069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1006 19:37:52.460119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1006 19:37:52.460198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1006 19:37:52.460292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1006 19:37:52.460374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1006 19:37:52.460628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1006 19:37:52.461048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1006 19:37:53.943275       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:38:44.955887       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1006 19:38:44.955944       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:38:44.956372       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1006 19:38:44.956396       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1006 19:38:44.956411       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1006 19:38:44.956430       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8e8330f950aa7b3752a5fe13d772a7e3fdbbadc6d01761c6972079bdf20a9513] <==
	I1006 19:38:57.282336       1 serving.go:386] Generated self-signed cert in-memory
	I1006 19:38:59.865170       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 19:38:59.865286       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:38:59.874357       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 19:38:59.874448       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1006 19:38:59.874474       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1006 19:38:59.874511       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 19:38:59.876375       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:38:59.876462       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:38:59.877121       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:38:59.877184       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:38:59.974571       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1006 19:38:59.976980       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:38:59.977716       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.625285    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-g6m52\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="af518d42-83f8-4dc8-95ad-ae6659a36a4b" pod="kube-system/kindnet-g6m52"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.625487    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-b49dq\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cd7b9c84-825b-4c88-9282-6ab75d1df072" pod="kube-system/coredns-66bc5c9577-b49dq"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: I1006 19:38:53.660326    1299 scope.go:117] "RemoveContainer" containerID="c7989b7481b364d561193295584c5dca811b622af9cf4e24341e42e50a91f5fc"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.661020    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-b49dq\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cd7b9c84-825b-4c88-9282-6ab75d1df072" pod="kube-system/coredns-66bc5c9577-b49dq"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.661344    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="64162cca36cad2a0ca53be81f5ca11cf" pod="kube-system/kube-apiserver-pause-719933"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.661624    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5830075111f19e963836198e0c56d9e1" pod="kube-system/etcd-pause-719933"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.661900    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e330089aec367c8728505d7f5d82f715" pod="kube-system/kube-controller-manager-pause-719933"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.662159    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="39fb5f3f07e723d10f30087f3606f27d" pod="kube-system/kube-scheduler-pause-719933"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.662417    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jq5mn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e0bdba86-2eef-494a-a380-06b1e0a60cdf" pod="kube-system/kube-proxy-jq5mn"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.662677    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-g6m52\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="af518d42-83f8-4dc8-95ad-ae6659a36a4b" pod="kube-system/kindnet-g6m52"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: I1006 19:38:53.665881    1299 scope.go:117] "RemoveContainer" containerID="70f2389a27f2cceb23c12fdf02125896dba5d458c676409a98715563aad756f6"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.666552    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e330089aec367c8728505d7f5d82f715" pod="kube-system/kube-controller-manager-pause-719933"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.666850    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="39fb5f3f07e723d10f30087f3606f27d" pod="kube-system/kube-scheduler-pause-719933"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.667193    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jq5mn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e0bdba86-2eef-494a-a380-06b1e0a60cdf" pod="kube-system/kube-proxy-jq5mn"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.667476    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-g6m52\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="af518d42-83f8-4dc8-95ad-ae6659a36a4b" pod="kube-system/kindnet-g6m52"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.667760    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-b49dq\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cd7b9c84-825b-4c88-9282-6ab75d1df072" pod="kube-system/coredns-66bc5c9577-b49dq"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.668041    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="64162cca36cad2a0ca53be81f5ca11cf" pod="kube-system/kube-apiserver-pause-719933"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.668313    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5830075111f19e963836198e0c56d9e1" pod="kube-system/etcd-pause-719933"
	Oct 06 19:38:54 pause-719933 kubelet[1299]: W1006 19:38:54.598049    1299 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 06 19:38:58 pause-719933 kubelet[1299]: E1006 19:38:58.896683    1299 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-719933\" is forbidden: User \"system:node:pause-719933\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-719933' and this object" podUID="5830075111f19e963836198e0c56d9e1" pod="kube-system/etcd-pause-719933"
	Oct 06 19:38:58 pause-719933 kubelet[1299]: E1006 19:38:58.897322    1299 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-719933\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-719933' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 06 19:39:04 pause-719933 kubelet[1299]: W1006 19:39:04.614562    1299 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 06 19:39:11 pause-719933 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 06 19:39:11 pause-719933 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 06 19:39:11 pause-719933 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-719933 -n pause-719933
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-719933 -n pause-719933: exit status 2 (484.966461ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-719933 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
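Note: the post-mortem above shows kube-scheduler exiting with "finished without leader elect" and systemd stopping kubelet.service, while the status probe still reports the API server as Running. A minimal sketch for triaging this by hand, assuming the pause-719933 profile from this run is still present on the host (only the .APIServer and .Host template fields appear verbatim in this report; the .Kubelet and .Kubeconfig fields are assumed from minikube's standard status output):

	out/minikube-linux-arm64 status -p pause-719933 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'
	docker inspect --format 'status={{.State.Status}} paused={{.State.Paused}}' pause-719933

The docker inspect output later in this post-mortem shows the node container itself still running and not paused, which is consistent with minikube pause acting on the Kubernetes components inside the node rather than on the node container itself.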
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-719933
helpers_test.go:243: (dbg) docker inspect pause-719933:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc5194c638ea19cae1dff7e756c2b6f31262614e8f03caac134766af5e2a924c",
	        "Created": "2025-10-06T19:37:30.340615797Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 160441,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:37:30.429213461Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/dc5194c638ea19cae1dff7e756c2b6f31262614e8f03caac134766af5e2a924c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc5194c638ea19cae1dff7e756c2b6f31262614e8f03caac134766af5e2a924c/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc5194c638ea19cae1dff7e756c2b6f31262614e8f03caac134766af5e2a924c/hosts",
	        "LogPath": "/var/lib/docker/containers/dc5194c638ea19cae1dff7e756c2b6f31262614e8f03caac134766af5e2a924c/dc5194c638ea19cae1dff7e756c2b6f31262614e8f03caac134766af5e2a924c-json.log",
	        "Name": "/pause-719933",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-719933:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-719933",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc5194c638ea19cae1dff7e756c2b6f31262614e8f03caac134766af5e2a924c",
	                "LowerDir": "/var/lib/docker/overlay2/46d0e14401bbf0ee17d31407e7cb2fe2d0480435408468baceef517afeda66bb-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46d0e14401bbf0ee17d31407e7cb2fe2d0480435408468baceef517afeda66bb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46d0e14401bbf0ee17d31407e7cb2fe2d0480435408468baceef517afeda66bb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46d0e14401bbf0ee17d31407e7cb2fe2d0480435408468baceef517afeda66bb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-719933",
	                "Source": "/var/lib/docker/volumes/pause-719933/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-719933",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-719933",
	                "name.minikube.sigs.k8s.io": "pause-719933",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "17767d519525d8d8d833ab8a04760ca6ae6b3ecba149a595fc109c7aee27cd9b",
	            "SandboxKey": "/var/run/docker/netns/17767d519525",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33029"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33028"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-719933": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:85:6a:6f:89:e6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "304c0013f70a9044a9185cb587a91cc4afec30622dc12a4648005723fcb6eeec",
	                    "EndpointID": "7a24afa7a956a7e7dc9f94f42039ceb715cec0306d8b21e92b20a41cb7181f02",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-719933",
	                        "dc5194c638ea"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
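The inspect output above maps the node's 8443/tcp (the API server port) to 127.0.0.1:33028 on the host. Given that mapping, a hedged way to probe the same healthz endpoint the minikube logs query internally (host port taken from the Ports section above; /healthz is normally readable without credentials on a default kubeadm-provisioned cluster):

	curl -k https://127.0.0.1:33028/healthz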
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-719933 -n pause-719933
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-719933 -n pause-719933: exit status 2 (326.473937ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
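Here too the host reports Running even though the pause command failed, and the harness explicitly treats exit status 2 as "may be ok". To see what was actually left behind inside the node, a sketch reusing commands that already appear elsewhere in this report (crictl and systemctl invocations as in the logs above; profile name from this run):

	out/minikube-linux-arm64 ssh -p pause-719933 sudo crictl ps -a
	out/minikube-linux-arm64 ssh -p pause-719933 sudo systemctl is-active kubelet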
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-719933 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-719933 logs -n 25: (1.365401226s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-262772 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:33 UTC │ 06 Oct 25 19:34 UTC │
	│ start   │ -p missing-upgrade-911983 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-911983    │ jenkins │ v1.32.0 │ 06 Oct 25 19:33 UTC │ 06 Oct 25 19:34 UTC │
	│ start   │ -p NoKubernetes-262772 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:34 UTC │
	│ delete  │ -p NoKubernetes-262772                                                                                                                   │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:34 UTC │
	│ start   │ -p NoKubernetes-262772 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:34 UTC │
	│ ssh     │ -p NoKubernetes-262772 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │                     │
	│ stop    │ -p NoKubernetes-262772                                                                                                                   │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:34 UTC │
	│ start   │ -p NoKubernetes-262772 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:34 UTC │
	│ start   │ -p missing-upgrade-911983 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-911983    │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:35 UTC │
	│ ssh     │ -p NoKubernetes-262772 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │                     │
	│ delete  │ -p NoKubernetes-262772                                                                                                                   │ NoKubernetes-262772       │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:34 UTC │
	│ start   │ -p kubernetes-upgrade-977990 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-977990 │ jenkins │ v1.37.0 │ 06 Oct 25 19:34 UTC │ 06 Oct 25 19:35 UTC │
	│ delete  │ -p missing-upgrade-911983                                                                                                                │ missing-upgrade-911983    │ jenkins │ v1.37.0 │ 06 Oct 25 19:35 UTC │ 06 Oct 25 19:35 UTC │
	│ stop    │ -p kubernetes-upgrade-977990                                                                                                             │ kubernetes-upgrade-977990 │ jenkins │ v1.37.0 │ 06 Oct 25 19:35 UTC │ 06 Oct 25 19:35 UTC │
	│ start   │ -p stopped-upgrade-360545 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-360545    │ jenkins │ v1.32.0 │ 06 Oct 25 19:35 UTC │ 06 Oct 25 19:36 UTC │
	│ start   │ -p kubernetes-upgrade-977990 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-977990 │ jenkins │ v1.37.0 │ 06 Oct 25 19:35 UTC │                     │
	│ stop    │ stopped-upgrade-360545 stop                                                                                                              │ stopped-upgrade-360545    │ jenkins │ v1.32.0 │ 06 Oct 25 19:36 UTC │ 06 Oct 25 19:36 UTC │
	│ start   │ -p stopped-upgrade-360545 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-360545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:36 UTC │ 06 Oct 25 19:36 UTC │
	│ delete  │ -p stopped-upgrade-360545                                                                                                                │ stopped-upgrade-360545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:36 UTC │ 06 Oct 25 19:36 UTC │
	│ start   │ -p running-upgrade-462878 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-462878    │ jenkins │ v1.32.0 │ 06 Oct 25 19:36 UTC │ 06 Oct 25 19:37 UTC │
	│ start   │ -p running-upgrade-462878 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-462878    │ jenkins │ v1.37.0 │ 06 Oct 25 19:37 UTC │ 06 Oct 25 19:37 UTC │
	│ delete  │ -p running-upgrade-462878                                                                                                                │ running-upgrade-462878    │ jenkins │ v1.37.0 │ 06 Oct 25 19:37 UTC │ 06 Oct 25 19:37 UTC │
	│ start   │ -p pause-719933 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-719933              │ jenkins │ v1.37.0 │ 06 Oct 25 19:37 UTC │ 06 Oct 25 19:38 UTC │
	│ start   │ -p pause-719933 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-719933              │ jenkins │ v1.37.0 │ 06 Oct 25 19:38 UTC │ 06 Oct 25 19:39 UTC │
	│ pause   │ -p pause-719933 --alsologtostderr -v=5                                                                                                   │ pause-719933              │ jenkins │ v1.37.0 │ 06 Oct 25 19:39 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:38:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:38:43.477582  164904 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:38:43.477705  164904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:38:43.477715  164904 out.go:374] Setting ErrFile to fd 2...
	I1006 19:38:43.477720  164904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:38:43.477958  164904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:38:43.478306  164904 out.go:368] Setting JSON to false
	I1006 19:38:43.479256  164904 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4859,"bootTime":1759774665,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:38:43.479338  164904 start.go:140] virtualization:  
	I1006 19:38:43.484648  164904 out.go:179] * [pause-719933] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:38:43.487922  164904 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:38:43.487971  164904 notify.go:220] Checking for updates...
	I1006 19:38:43.493786  164904 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:38:43.497184  164904 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:38:43.500213  164904 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:38:43.504201  164904 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:38:43.507310  164904 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:38:43.511671  164904 config.go:182] Loaded profile config "pause-719933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:38:43.512890  164904 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:38:43.536607  164904 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:38:43.536737  164904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:38:43.597902  164904 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-06 19:38:43.58827506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:38:43.598016  164904 docker.go:318] overlay module found
	I1006 19:38:43.601196  164904 out.go:179] * Using the docker driver based on existing profile
	I1006 19:38:43.604016  164904 start.go:304] selected driver: docker
	I1006 19:38:43.604038  164904 start.go:924] validating driver "docker" against &{Name:pause-719933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-719933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:38:43.604185  164904 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:38:43.604299  164904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:38:43.682936  164904 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-06 19:38:43.673378079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:38:43.683413  164904 cni.go:84] Creating CNI manager for ""
	I1006 19:38:43.683476  164904 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:38:43.683523  164904 start.go:348] cluster config:
	{Name:pause-719933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-719933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:38:43.686987  164904 out.go:179] * Starting "pause-719933" primary control-plane node in "pause-719933" cluster
	I1006 19:38:43.689859  164904 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:38:43.692949  164904 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:38:43.695959  164904 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:38:43.696018  164904 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:38:43.696042  164904 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:38:43.696050  164904 cache.go:58] Caching tarball of preloaded images
	I1006 19:38:43.696145  164904 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:38:43.696156  164904 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:38:43.696296  164904 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/config.json ...
	I1006 19:38:43.715646  164904 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:38:43.715668  164904 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:38:43.715681  164904 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:38:43.715731  164904 start.go:360] acquireMachinesLock for pause-719933: {Name:mkc41d7470fc9b98864ba4a88dcf841a69abe0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:38:43.715791  164904 start.go:364] duration metric: took 36.3µs to acquireMachinesLock for "pause-719933"
	I1006 19:38:43.715816  164904 start.go:96] Skipping create...Using existing machine configuration
	I1006 19:38:43.715828  164904 fix.go:54] fixHost starting: 
	I1006 19:38:43.716086  164904 cli_runner.go:164] Run: docker container inspect pause-719933 --format={{.State.Status}}
	I1006 19:38:43.733099  164904 fix.go:112] recreateIfNeeded on pause-719933: state=Running err=<nil>
	W1006 19:38:43.733129  164904 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 19:38:42.581512  149088 cri.go:89] found id: ""
	I1006 19:38:42.581536  149088 logs.go:282] 0 containers: []
	W1006 19:38:42.581545  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:38:42.581554  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:38:42.581565  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:38:42.704247  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:38:42.704282  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:38:42.719016  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:38:42.719044  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:38:42.788175  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:38:42.788198  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:38:42.788212  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:42.822188  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:38:42.822216  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:42.847197  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:38:42.847224  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:38:42.891172  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:38:42.891208  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:38:45.430741  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:38:45.431938  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:38:45.432502  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:38:45.432594  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:38:45.484107  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:45.484133  149088 cri.go:89] found id: ""
	I1006 19:38:45.484142  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:38:45.484202  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:45.489158  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:38:45.489247  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:38:45.529844  149088 cri.go:89] found id: ""
	I1006 19:38:45.529876  149088 logs.go:282] 0 containers: []
	W1006 19:38:45.529885  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:38:45.529891  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:38:45.529952  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:38:45.560766  149088 cri.go:89] found id: ""
	I1006 19:38:45.560791  149088 logs.go:282] 0 containers: []
	W1006 19:38:45.560807  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:38:45.560814  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:38:45.560876  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:38:45.589975  149088 cri.go:89] found id: ""
	I1006 19:38:45.590002  149088 logs.go:282] 0 containers: []
	W1006 19:38:45.590012  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:38:45.590019  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:38:45.590125  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:38:45.618870  149088 cri.go:89] found id: ""
	I1006 19:38:45.618894  149088 logs.go:282] 0 containers: []
	W1006 19:38:45.618903  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:38:45.618909  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:38:45.618972  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:38:45.647029  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:45.647049  149088 cri.go:89] found id: ""
	I1006 19:38:45.647057  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:38:45.647113  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:45.650829  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:38:45.650903  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:38:45.679342  149088 cri.go:89] found id: ""
	I1006 19:38:45.679365  149088 logs.go:282] 0 containers: []
	W1006 19:38:45.679374  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:38:45.679381  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:38:45.679440  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:38:45.705051  149088 cri.go:89] found id: ""
	I1006 19:38:45.705131  149088 logs.go:282] 0 containers: []
	W1006 19:38:45.705147  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:38:45.705156  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:38:45.705168  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:38:45.720065  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:38:45.720095  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:38:45.797275  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:38:45.797338  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:38:45.797360  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:45.828598  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:38:45.828631  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:45.858392  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:38:45.858422  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:38:45.902970  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:38:45.903006  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:38:45.932364  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:38:45.932396  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:38:43.736384  164904 out.go:252] * Updating the running docker "pause-719933" container ...
	I1006 19:38:43.736438  164904 machine.go:93] provisionDockerMachine start ...
	I1006 19:38:43.736517  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:43.753657  164904 main.go:141] libmachine: Using SSH client type: native
	I1006 19:38:43.754013  164904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33025 <nil> <nil>}
	I1006 19:38:43.754031  164904 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:38:43.887213  164904 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-719933
	
	I1006 19:38:43.887238  164904 ubuntu.go:182] provisioning hostname "pause-719933"
	I1006 19:38:43.887335  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:43.905038  164904 main.go:141] libmachine: Using SSH client type: native
	I1006 19:38:43.905356  164904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33025 <nil> <nil>}
	I1006 19:38:43.905371  164904 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-719933 && echo "pause-719933" | sudo tee /etc/hostname
	I1006 19:38:44.049845  164904 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-719933
	
	I1006 19:38:44.049929  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:44.067969  164904 main.go:141] libmachine: Using SSH client type: native
	I1006 19:38:44.068283  164904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33025 <nil> <nil>}
	I1006 19:38:44.068307  164904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-719933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-719933/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-719933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:38:44.204018  164904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:38:44.204046  164904 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:38:44.204065  164904 ubuntu.go:190] setting up certificates
	I1006 19:38:44.204073  164904 provision.go:84] configureAuth start
	I1006 19:38:44.204130  164904 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-719933
	I1006 19:38:44.221728  164904 provision.go:143] copyHostCerts
	I1006 19:38:44.221800  164904 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:38:44.221819  164904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:38:44.221895  164904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:38:44.222000  164904 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:38:44.222011  164904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:38:44.222037  164904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:38:44.222097  164904 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:38:44.222105  164904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:38:44.222128  164904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:38:44.222178  164904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.pause-719933 san=[127.0.0.1 192.168.85.2 localhost minikube pause-719933]
	I1006 19:38:44.609082  164904 provision.go:177] copyRemoteCerts
	I1006 19:38:44.609151  164904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:38:44.609195  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:44.627376  164904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/pause-719933/id_rsa Username:docker}
	I1006 19:38:44.723574  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:38:44.743133  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1006 19:38:44.761500  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 19:38:44.779737  164904 provision.go:87] duration metric: took 575.640319ms to configureAuth
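configureAuth above just regenerated the machine server certificate with the SAN list printed at provision.go:117; a quick, illustrative way to confirm those SANs ended up in the file (path taken from the log) is:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'    # expect 127.0.0.1, 192.168.85.2, localhost, minikube, pause-719933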
	I1006 19:38:44.779806  164904 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:38:44.780054  164904 config.go:182] Loaded profile config "pause-719933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:38:44.780171  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:44.797965  164904 main.go:141] libmachine: Using SSH client type: native
	I1006 19:38:44.798281  164904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33025 <nil> <nil>}
	I1006 19:38:44.798301  164904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:38:50.140091  164904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:38:50.140112  164904 machine.go:96] duration metric: took 6.403665399s to provisionDockerMachine
	I1006 19:38:50.140123  164904 start.go:293] postStartSetup for "pause-719933" (driver="docker")
	I1006 19:38:50.140133  164904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:38:50.140213  164904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:38:50.140258  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:50.158784  164904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/pause-719933/id_rsa Username:docker}
	I1006 19:38:50.255891  164904 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:38:50.259666  164904 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:38:50.259732  164904 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:38:50.259751  164904 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:38:50.259819  164904 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:38:50.259937  164904 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:38:50.260058  164904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:38:50.268016  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:38:50.285949  164904 start.go:296] duration metric: took 145.811356ms for postStartSetup
	I1006 19:38:50.286039  164904 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:38:50.286107  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:50.303731  164904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/pause-719933/id_rsa Username:docker}
	I1006 19:38:50.397121  164904 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:38:50.402166  164904 fix.go:56] duration metric: took 6.686336212s for fixHost
	I1006 19:38:50.402190  164904 start.go:83] releasing machines lock for "pause-719933", held for 6.686385812s
	I1006 19:38:50.402264  164904 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-719933
	I1006 19:38:50.421320  164904 ssh_runner.go:195] Run: cat /version.json
	I1006 19:38:50.421362  164904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:38:50.421380  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:50.421427  164904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-719933
	I1006 19:38:50.447904  164904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/pause-719933/id_rsa Username:docker}
	I1006 19:38:50.453267  164904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/pause-719933/id_rsa Username:docker}
	I1006 19:38:50.629030  164904 ssh_runner.go:195] Run: systemctl --version
	I1006 19:38:50.635676  164904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:38:50.679927  164904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:38:50.684823  164904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:38:50.684889  164904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:38:50.692757  164904 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 19:38:50.692822  164904 start.go:495] detecting cgroup driver to use...
	I1006 19:38:50.692865  164904 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:38:50.692917  164904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:38:50.708401  164904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:38:50.721398  164904 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:38:50.721462  164904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:38:50.736968  164904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:38:50.750488  164904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:38:50.883597  164904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:38:51.024455  164904 docker.go:234] disabling docker service ...
	I1006 19:38:51.024533  164904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:38:51.040735  164904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:38:51.054865  164904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:38:51.192686  164904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:38:51.323752  164904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
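By this point the cri-dockerd and docker units have been stopped, disabled, and masked so that CRI-O is the only runtime serving the CRI socket. A hedged way to confirm that state on the node (unit names as they appear in the log):

	sudo systemctl is-enabled cri-docker.socket cri-docker.service docker.socket docker.service   # expect disabled/masked
	sudo systemctl is-active crio                                                                 # expect active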
	I1006 19:38:51.337406  164904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:38:51.351857  164904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:38:51.351932  164904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:38:51.360936  164904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:38:51.361047  164904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:38:51.370073  164904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:38:51.378801  164904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:38:51.387775  164904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:38:51.396313  164904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:38:51.405761  164904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:38:51.414164  164904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:38:51.423136  164904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:38:51.430757  164904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:38:51.438327  164904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:38:51.568515  164904 ssh_runner.go:195] Run: sudo systemctl restart crio
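The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) before systemd is reloaded and CRI-O restarted. A short, illustrative check that the edits landed and the runtime came back:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio && sudo /usr/local/bin/crictl info > /dev/null && echo "CRI socket responding"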
	I1006 19:38:51.784930  164904 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:38:51.785019  164904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:38:51.792122  164904 start.go:563] Will wait 60s for crictl version
	I1006 19:38:51.792216  164904 ssh_runner.go:195] Run: which crictl
	I1006 19:38:51.797931  164904 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:38:51.836906  164904 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:38:51.837016  164904 ssh_runner.go:195] Run: crio --version
	I1006 19:38:51.881411  164904 ssh_runner.go:195] Run: crio --version
	I1006 19:38:51.931437  164904 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:38:48.553703  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:38:48.554089  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:38:48.554135  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:38:48.554191  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:38:48.580220  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:48.580244  149088 cri.go:89] found id: ""
	I1006 19:38:48.580256  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:38:48.580340  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:48.584419  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:38:48.584490  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:38:48.612911  149088 cri.go:89] found id: ""
	I1006 19:38:48.612936  149088 logs.go:282] 0 containers: []
	W1006 19:38:48.612945  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:38:48.612951  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:38:48.613052  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:38:48.646669  149088 cri.go:89] found id: ""
	I1006 19:38:48.646694  149088 logs.go:282] 0 containers: []
	W1006 19:38:48.646703  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:38:48.646710  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:38:48.646766  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:38:48.672750  149088 cri.go:89] found id: ""
	I1006 19:38:48.672806  149088 logs.go:282] 0 containers: []
	W1006 19:38:48.672815  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:38:48.672822  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:38:48.672884  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:38:48.699107  149088 cri.go:89] found id: ""
	I1006 19:38:48.699137  149088 logs.go:282] 0 containers: []
	W1006 19:38:48.699146  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:38:48.699152  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:38:48.699228  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:38:48.727536  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:48.727559  149088 cri.go:89] found id: ""
	I1006 19:38:48.727567  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:38:48.727622  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:48.731259  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:38:48.731357  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:38:48.761368  149088 cri.go:89] found id: ""
	I1006 19:38:48.761402  149088 logs.go:282] 0 containers: []
	W1006 19:38:48.761411  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:38:48.761417  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:38:48.761476  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:38:48.788183  149088 cri.go:89] found id: ""
	I1006 19:38:48.788259  149088 logs.go:282] 0 containers: []
	W1006 19:38:48.788283  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:38:48.788300  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:38:48.788312  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:38:48.906900  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:38:48.906935  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:38:48.921944  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:38:48.921973  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:38:48.993102  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:38:48.993133  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:38:48.993146  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:49.025513  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:38:49.025548  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:49.051634  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:38:49.051664  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:38:49.095558  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:38:49.095591  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:38:51.626832  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:38:51.627204  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:38:51.627242  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:38:51.627294  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:38:51.674791  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:51.674809  149088 cri.go:89] found id: ""
	I1006 19:38:51.674817  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:38:51.674878  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:51.679072  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:38:51.679137  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:38:51.711667  149088 cri.go:89] found id: ""
	I1006 19:38:51.711687  149088 logs.go:282] 0 containers: []
	W1006 19:38:51.711731  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:38:51.711739  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:38:51.711793  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:38:51.742183  149088 cri.go:89] found id: ""
	I1006 19:38:51.742204  149088 logs.go:282] 0 containers: []
	W1006 19:38:51.742213  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:38:51.742219  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:38:51.742274  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:38:51.775009  149088 cri.go:89] found id: ""
	I1006 19:38:51.775030  149088 logs.go:282] 0 containers: []
	W1006 19:38:51.775039  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:38:51.775044  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:38:51.775121  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:38:51.813652  149088 cri.go:89] found id: ""
	I1006 19:38:51.813676  149088 logs.go:282] 0 containers: []
	W1006 19:38:51.813684  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:38:51.813691  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:38:51.813755  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:38:51.847340  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:51.847359  149088 cri.go:89] found id: ""
	I1006 19:38:51.847366  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:38:51.847421  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:51.851828  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:38:51.851898  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:38:51.884822  149088 cri.go:89] found id: ""
	I1006 19:38:51.884844  149088 logs.go:282] 0 containers: []
	W1006 19:38:51.884853  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:38:51.884859  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:38:51.885005  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:38:51.929191  149088 cri.go:89] found id: ""
	I1006 19:38:51.929215  149088 logs.go:282] 0 containers: []
	W1006 19:38:51.929223  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:38:51.929244  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:38:51.929273  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:38:52.069115  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:38:52.069163  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:38:52.085485  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:38:52.085518  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:38:52.197336  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:38:52.197358  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:38:52.197371  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:52.255472  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:38:52.255513  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:52.316170  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:38:52.316208  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:38:52.380847  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:38:52.380898  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:38:51.934497  164904 cli_runner.go:164] Run: docker network inspect pause-719933 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:38:51.974693  164904 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1006 19:38:51.978878  164904 kubeadm.go:883] updating cluster {Name:pause-719933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-719933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:38:51.979071  164904 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:38:51.979125  164904 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:38:52.028416  164904 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:38:52.028438  164904 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:38:52.028501  164904 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:38:52.059388  164904 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:38:52.059408  164904 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:38:52.059417  164904 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1006 19:38:52.059554  164904 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-719933 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-719933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:38:52.059633  164904 ssh_runner.go:195] Run: crio config
	I1006 19:38:52.143993  164904 cni.go:84] Creating CNI manager for ""
	I1006 19:38:52.144073  164904 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:38:52.144109  164904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:38:52.144167  164904 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-719933 NodeName:pause-719933 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:38:52.144338  164904 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-719933"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:38:52.144476  164904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:38:52.154427  164904 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:38:52.154540  164904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:38:52.163425  164904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1006 19:38:52.179207  164904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:38:52.196539  164904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1006 19:38:52.214653  164904 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:38:52.219556  164904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:38:52.410566  164904 ssh_runner.go:195] Run: sudo systemctl start kubelet
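The scp calls above rewrite the kubelet drop-in (10-kubeadm.conf), the kubelet unit, and a fresh /var/tmp/minikube/kubeadm.yaml.new before systemd is reloaded and kubelet started. A sketch of how to confirm the rendered unit and the pending kubeadm config afterwards (paths and flags as in the log):

	sudo systemctl cat kubelet | grep node-ip          # drop-in should carry --node-ip=192.168.85.2
	sudo systemctl is-active kubelet
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new   # the same comparison minikube runs later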
	I1006 19:38:52.425665  164904 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933 for IP: 192.168.85.2
	I1006 19:38:52.425738  164904 certs.go:195] generating shared ca certs ...
	I1006 19:38:52.425768  164904 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:38:52.425943  164904 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:38:52.426018  164904 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:38:52.426053  164904 certs.go:257] generating profile certs ...
	I1006 19:38:52.426171  164904 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/client.key
	I1006 19:38:52.426275  164904 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/apiserver.key.cfa34ef1
	I1006 19:38:52.426343  164904 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/proxy-client.key
	I1006 19:38:52.426480  164904 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:38:52.426534  164904 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:38:52.426557  164904 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:38:52.426613  164904 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:38:52.426674  164904 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:38:52.426729  164904 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:38:52.426798  164904 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:38:52.427488  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:38:52.451675  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:38:52.471905  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:38:52.492044  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:38:52.510525  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1006 19:38:52.528762  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 19:38:52.547559  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:38:52.566221  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 19:38:52.585912  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:38:52.604145  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:38:52.621401  164904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:38:52.639058  164904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:38:52.653536  164904 ssh_runner.go:195] Run: openssl version
	I1006 19:38:52.660018  164904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:38:52.668556  164904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:38:52.672317  164904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:38:52.672402  164904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:38:52.713259  164904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:38:52.721146  164904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:38:52.729226  164904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:38:52.732808  164904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:38:52.732921  164904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:38:52.773909  164904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:38:52.781963  164904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:38:52.790340  164904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:38:52.794292  164904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:38:52.794365  164904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:38:52.837543  164904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
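The symlinks created above are how the CA certificates get installed into the system trust store: each /etc/ssl/certs/<hash>.0 name is the certificate's subject hash, which is exactly what the preceding openssl x509 -hash -noout calls printed. For example, reproducing the minikubeCA link name from the log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941, hence /etc/ssl/certs/b5213941.0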
	I1006 19:38:52.845567  164904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:38:52.849446  164904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 19:38:52.891121  164904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 19:38:52.931977  164904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 19:38:52.973173  164904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 19:38:53.015648  164904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 19:38:53.060348  164904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
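Each openssl x509 ... -checkend 86400 call above asks whether the certificate expires within the next 86400 seconds (24 hours): the command exits 0 if the cert stays valid for that window and non-zero otherwise, which is what lets the restart path reuse the existing control-plane certs. Standalone example with one of the files checked above:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" \
	  || echo "expires within 24h"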
	I1006 19:38:53.101764  164904 kubeadm.go:400] StartCluster: {Name:pause-719933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-719933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:38:53.101897  164904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:38:53.101967  164904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:38:53.131946  164904 cri.go:89] found id: "6ccbb47882248567001f98caa3ed6d08c96bd67cf7e41b581b8e20288b7ba653"
	I1006 19:38:53.131969  164904 cri.go:89] found id: "611f7ec82bc68048be0ca0d31606b1d278412943b79fc8a0c4c33da5805164c5"
	I1006 19:38:53.131974  164904 cri.go:89] found id: "4d6e88c27772c35c97d8c35c87ce943957683a84df865f52d99dc5d63f21d8a0"
	I1006 19:38:53.131978  164904 cri.go:89] found id: "70f2389a27f2cceb23c12fdf02125896dba5d458c676409a98715563aad756f6"
	I1006 19:38:53.131981  164904 cri.go:89] found id: "098520989a4ccf81cd5f027022cd81c910e3d341bf7918c27926e6a914e07feb"
	I1006 19:38:53.131985  164904 cri.go:89] found id: "c7989b7481b364d561193295584c5dca811b622af9cf4e24341e42e50a91f5fc"
	I1006 19:38:53.131989  164904 cri.go:89] found id: "dbf88fd449582624356375d9d2477b46834de96fb325ba01fe1444314a62865e"
	I1006 19:38:53.131992  164904 cri.go:89] found id: ""
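The IDs above are the kube-system containers that StartCluster found via the crictl label filter; each container's labels carry its pod and container names, so a single ID can be mapped back to its pod. A hedged example using the first ID from the list:

	sudo crictl inspect 6ccbb47882248567001f98caa3ed6d08c96bd67cf7e41b581b8e20288b7ba653 \
	  | grep -E '"io.kubernetes.(pod.name|pod.namespace|container.name)"'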
	I1006 19:38:53.132044  164904 ssh_runner.go:195] Run: sudo runc list -f json
	W1006 19:38:53.142937  164904 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:38:53Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:38:53.143036  164904 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:38:53.150964  164904 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 19:38:53.150990  164904 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 19:38:53.151042  164904 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 19:38:53.158365  164904 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 19:38:53.159045  164904 kubeconfig.go:125] found "pause-719933" server: "https://192.168.85.2:8443"
	I1006 19:38:53.159931  164904 kapi.go:59] client config for pause-719933: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/client.key", CAFile:"/home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 19:38:53.160418  164904 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1006 19:38:53.160438  164904 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1006 19:38:53.160444  164904 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1006 19:38:53.160449  164904 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1006 19:38:53.160454  164904 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1006 19:38:53.160728  164904 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 19:38:53.169080  164904 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1006 19:38:53.169154  164904 kubeadm.go:601] duration metric: took 18.157283ms to restartPrimaryControlPlane
	I1006 19:38:53.169173  164904 kubeadm.go:402] duration metric: took 67.4146ms to StartCluster
	I1006 19:38:53.169192  164904 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:38:53.169250  164904 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:38:53.170140  164904 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:38:53.170371  164904 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:38:53.170708  164904 config.go:182] Loaded profile config "pause-719933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:38:53.170759  164904 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:38:53.175905  164904 out.go:179] * Verifying Kubernetes components...
	I1006 19:38:53.175905  164904 out.go:179] * Enabled addons: 
	I1006 19:38:53.178704  164904 addons.go:514] duration metric: took 7.927778ms for enable addons: enabled=[]
	I1006 19:38:53.178743  164904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:38:53.314020  164904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:38:53.328642  164904 node_ready.go:35] waiting up to 6m0s for node "pause-719933" to be "Ready" ...
	I1006 19:38:54.945936  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:38:54.946326  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:38:54.946365  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:38:54.946419  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:38:54.994808  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:54.994827  149088 cri.go:89] found id: ""
	I1006 19:38:54.994835  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:38:54.994898  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:55.004013  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:38:55.004087  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:38:55.074405  149088 cri.go:89] found id: ""
	I1006 19:38:55.074427  149088 logs.go:282] 0 containers: []
	W1006 19:38:55.074435  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:38:55.074442  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:38:55.074513  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:38:55.139028  149088 cri.go:89] found id: ""
	I1006 19:38:55.139057  149088 logs.go:282] 0 containers: []
	W1006 19:38:55.139066  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:38:55.139072  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:38:55.139137  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:38:55.182741  149088 cri.go:89] found id: ""
	I1006 19:38:55.182762  149088 logs.go:282] 0 containers: []
	W1006 19:38:55.182771  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:38:55.182777  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:38:55.182844  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:38:55.242097  149088 cri.go:89] found id: ""
	I1006 19:38:55.242121  149088 logs.go:282] 0 containers: []
	W1006 19:38:55.242130  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:38:55.242136  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:38:55.242195  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:38:55.287213  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:55.287233  149088 cri.go:89] found id: ""
	I1006 19:38:55.287240  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:38:55.287308  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:55.296382  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:38:55.296520  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:38:55.346165  149088 cri.go:89] found id: ""
	I1006 19:38:55.346231  149088 logs.go:282] 0 containers: []
	W1006 19:38:55.346253  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:38:55.346270  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:38:55.346364  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:38:55.398097  149088 cri.go:89] found id: ""
	I1006 19:38:55.398178  149088 logs.go:282] 0 containers: []
	W1006 19:38:55.398200  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:38:55.398238  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:38:55.398267  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:38:55.456319  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:38:55.456399  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:38:55.641098  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:38:55.641133  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:38:55.667783  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:38:55.667868  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:38:55.760859  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:38:55.760884  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:38:55.760902  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:55.795615  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:38:55.795651  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:55.846652  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:38:55.846682  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:38:58.407328  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:38:58.407820  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:38:58.407867  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:38:58.407924  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:38:58.460197  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:58.460223  149088 cri.go:89] found id: ""
	I1006 19:38:58.460232  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:38:58.460295  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:58.468109  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:38:58.468189  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:38:58.523028  149088 cri.go:89] found id: ""
	I1006 19:38:58.523056  149088 logs.go:282] 0 containers: []
	W1006 19:38:58.523065  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:38:58.523072  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:38:58.523143  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:38:58.570867  149088 cri.go:89] found id: ""
	I1006 19:38:58.570894  149088 logs.go:282] 0 containers: []
	W1006 19:38:58.570903  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:38:58.570910  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:38:58.570969  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:38:58.637154  149088 cri.go:89] found id: ""
	I1006 19:38:58.637175  149088 logs.go:282] 0 containers: []
	W1006 19:38:58.637184  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:38:58.637190  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:38:58.637251  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:38:58.677932  149088 cri.go:89] found id: ""
	I1006 19:38:58.677953  149088 logs.go:282] 0 containers: []
	W1006 19:38:58.677961  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:38:58.677968  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:38:58.678029  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:38:58.722029  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:58.722049  149088 cri.go:89] found id: ""
	I1006 19:38:58.722057  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:38:58.722116  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:38:58.726350  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:38:58.726422  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:38:58.763232  149088 cri.go:89] found id: ""
	I1006 19:38:58.763253  149088 logs.go:282] 0 containers: []
	W1006 19:38:58.763261  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:38:58.763267  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:38:58.763327  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:38:58.795852  149088 cri.go:89] found id: ""
	I1006 19:38:58.795929  149088 logs.go:282] 0 containers: []
	W1006 19:38:58.795950  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:38:58.795974  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:38:58.796012  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:38:58.845457  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:38:58.845527  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:38:58.903928  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:38:58.903962  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:38:58.968567  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:38:58.968608  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:38:59.039136  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:38:59.039167  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:38:59.172379  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:38:59.172419  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:38:59.195951  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:38:59.195982  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:38:59.284957  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:39:01.786402  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:39:01.786820  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:39:01.786866  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:39:01.786922  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:39:01.823475  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:01.823496  149088 cri.go:89] found id: ""
	I1006 19:39:01.823504  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:39:01.823563  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:01.827366  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:39:01.827450  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:39:01.853207  149088 cri.go:89] found id: ""
	I1006 19:39:01.853230  149088 logs.go:282] 0 containers: []
	W1006 19:39:01.853239  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:39:01.853245  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:39:01.853304  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:39:01.879881  149088 cri.go:89] found id: ""
	I1006 19:39:01.879911  149088 logs.go:282] 0 containers: []
	W1006 19:39:01.879920  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:39:01.879927  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:39:01.879988  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:39:01.914447  149088 cri.go:89] found id: ""
	I1006 19:39:01.914479  149088 logs.go:282] 0 containers: []
	W1006 19:39:01.914488  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:39:01.914494  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:39:01.914552  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:39:01.942254  149088 cri.go:89] found id: ""
	I1006 19:39:01.942275  149088 logs.go:282] 0 containers: []
	W1006 19:39:01.942284  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:39:01.942291  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:39:01.942350  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:39:01.971016  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:01.971034  149088 cri.go:89] found id: ""
	I1006 19:39:01.971042  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:39:01.971096  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:01.974882  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:39:01.974951  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:39:02.002724  149088 cri.go:89] found id: ""
	I1006 19:39:02.002747  149088 logs.go:282] 0 containers: []
	W1006 19:39:02.002756  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:39:02.002763  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:39:02.002823  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:39:02.033817  149088 cri.go:89] found id: ""
	I1006 19:39:02.033847  149088 logs.go:282] 0 containers: []
	W1006 19:39:02.033856  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:39:02.033866  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:39:02.033896  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:39:02.079364  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:39:02.079446  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:39:02.210537  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:39:02.210574  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:39:02.225619  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:39:02.225650  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:39:02.297678  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:39:02.297697  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:39:02.297710  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:02.334962  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:39:02.334990  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:02.369633  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:39:02.369662  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:38:59.467596  164904 node_ready.go:49] node "pause-719933" is "Ready"
	I1006 19:38:59.467622  164904 node_ready.go:38] duration metric: took 6.138950073s for node "pause-719933" to be "Ready" ...
	I1006 19:38:59.467634  164904 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:38:59.467693  164904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:38:59.483641  164904 api_server.go:72] duration metric: took 6.313232444s to wait for apiserver process to appear ...
	I1006 19:38:59.483662  164904 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:38:59.483681  164904 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:38:59.522135  164904 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 19:38:59.522237  164904 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 19:38:59.983776  164904 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:38:59.996688  164904 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 19:38:59.996784  164904 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 19:39:00.484547  164904 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:39:00.496183  164904 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 19:39:00.496217  164904 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 19:39:00.983836  164904 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:39:00.992118  164904 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1006 19:39:00.993344  164904 api_server.go:141] control plane version: v1.34.1
	I1006 19:39:00.993373  164904 api_server.go:131] duration metric: took 1.509703027s to wait for apiserver health ...
	I1006 19:39:00.993398  164904 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:39:00.997073  164904 system_pods.go:59] 7 kube-system pods found
	I1006 19:39:00.997112  164904 system_pods.go:61] "coredns-66bc5c9577-b49dq" [cd7b9c84-825b-4c88-9282-6ab75d1df072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:39:00.997121  164904 system_pods.go:61] "etcd-pause-719933" [5c5d8626-929a-4e13-804f-5f7c96f99727] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:39:00.997127  164904 system_pods.go:61] "kindnet-g6m52" [af518d42-83f8-4dc8-95ad-ae6659a36a4b] Running
	I1006 19:39:00.997134  164904 system_pods.go:61] "kube-apiserver-pause-719933" [3018ee16-ba2d-4552-9833-63982cb79f6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:39:00.997143  164904 system_pods.go:61] "kube-controller-manager-pause-719933" [98dfb5e3-bd9a-45f2-a211-b83d28d1759f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:39:00.997153  164904 system_pods.go:61] "kube-proxy-jq5mn" [e0bdba86-2eef-494a-a380-06b1e0a60cdf] Running
	I1006 19:39:00.997159  164904 system_pods.go:61] "kube-scheduler-pause-719933" [2225abd6-ba17-40ca-be55-5176b9cfe17a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:39:00.997177  164904 system_pods.go:74] duration metric: took 3.764082ms to wait for pod list to return data ...
	I1006 19:39:00.997186  164904 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:39:00.999869  164904 default_sa.go:45] found service account: "default"
	I1006 19:39:00.999898  164904 default_sa.go:55] duration metric: took 2.703369ms for default service account to be created ...
	I1006 19:39:00.999908  164904 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 19:39:01.002691  164904 system_pods.go:86] 7 kube-system pods found
	I1006 19:39:01.002723  164904 system_pods.go:89] "coredns-66bc5c9577-b49dq" [cd7b9c84-825b-4c88-9282-6ab75d1df072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:39:01.002757  164904 system_pods.go:89] "etcd-pause-719933" [5c5d8626-929a-4e13-804f-5f7c96f99727] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:39:01.002773  164904 system_pods.go:89] "kindnet-g6m52" [af518d42-83f8-4dc8-95ad-ae6659a36a4b] Running
	I1006 19:39:01.002780  164904 system_pods.go:89] "kube-apiserver-pause-719933" [3018ee16-ba2d-4552-9833-63982cb79f6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:39:01.002787  164904 system_pods.go:89] "kube-controller-manager-pause-719933" [98dfb5e3-bd9a-45f2-a211-b83d28d1759f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:39:01.002795  164904 system_pods.go:89] "kube-proxy-jq5mn" [e0bdba86-2eef-494a-a380-06b1e0a60cdf] Running
	I1006 19:39:01.002803  164904 system_pods.go:89] "kube-scheduler-pause-719933" [2225abd6-ba17-40ca-be55-5176b9cfe17a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:39:01.002816  164904 system_pods.go:126] duration metric: took 2.902245ms to wait for k8s-apps to be running ...
	I1006 19:39:01.002846  164904 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 19:39:01.002916  164904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:39:01.029488  164904 system_svc.go:56] duration metric: took 26.633148ms WaitForService to wait for kubelet
	I1006 19:39:01.029512  164904 kubeadm.go:586] duration metric: took 7.859108918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:39:01.029532  164904 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:39:01.032300  164904 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:39:01.032337  164904 node_conditions.go:123] node cpu capacity is 2
	I1006 19:39:01.032351  164904 node_conditions.go:105] duration metric: took 2.812194ms to run NodePressure ...
	I1006 19:39:01.032364  164904 start.go:241] waiting for startup goroutines ...
	I1006 19:39:01.032372  164904 start.go:246] waiting for cluster config update ...
	I1006 19:39:01.032380  164904 start.go:255] writing updated cluster config ...
	I1006 19:39:01.032703  164904 ssh_runner.go:195] Run: rm -f paused
	I1006 19:39:01.036251  164904 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:39:01.037016  164904 kapi.go:59] client config for pause-719933: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-2540/.minikube/profiles/pause-719933/client.key", CAFile:"/home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 19:39:01.039941  164904 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b49dq" in "kube-system" namespace to be "Ready" or be gone ...
	W1006 19:39:03.045579  164904 pod_ready.go:104] pod "coredns-66bc5c9577-b49dq" is not "Ready", error: <nil>
	I1006 19:39:04.920578  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:39:04.921008  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:39:04.921068  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:39:04.921128  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:39:04.952306  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:04.952331  149088 cri.go:89] found id: ""
	I1006 19:39:04.952350  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:39:04.952413  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:04.956155  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:39:04.956231  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:39:04.983490  149088 cri.go:89] found id: ""
	I1006 19:39:04.983514  149088 logs.go:282] 0 containers: []
	W1006 19:39:04.983522  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:39:04.983529  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:39:04.983590  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:39:05.010329  149088 cri.go:89] found id: ""
	I1006 19:39:05.010352  149088 logs.go:282] 0 containers: []
	W1006 19:39:05.010360  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:39:05.010370  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:39:05.010428  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:39:05.041247  149088 cri.go:89] found id: ""
	I1006 19:39:05.041274  149088 logs.go:282] 0 containers: []
	W1006 19:39:05.041283  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:39:05.041289  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:39:05.041350  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:39:05.071162  149088 cri.go:89] found id: ""
	I1006 19:39:05.071188  149088 logs.go:282] 0 containers: []
	W1006 19:39:05.071197  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:39:05.071203  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:39:05.071262  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:39:05.098171  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:05.098202  149088 cri.go:89] found id: ""
	I1006 19:39:05.098213  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:39:05.098272  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:05.102425  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:39:05.102504  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:39:05.130211  149088 cri.go:89] found id: ""
	I1006 19:39:05.130289  149088 logs.go:282] 0 containers: []
	W1006 19:39:05.130312  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:39:05.130330  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:39:05.130418  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:39:05.157818  149088 cri.go:89] found id: ""
	I1006 19:39:05.157842  149088 logs.go:282] 0 containers: []
	W1006 19:39:05.157850  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:39:05.157858  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:39:05.157877  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:05.189016  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:39:05.189073  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:05.217093  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:39:05.217116  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:39:05.260125  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:39:05.260157  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:39:05.292615  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:39:05.292642  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:39:05.421925  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:39:05.421962  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:39:05.437176  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:39:05.437208  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:39:05.511755  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1006 19:39:05.047746  164904 pod_ready.go:104] pod "coredns-66bc5c9577-b49dq" is not "Ready", error: <nil>
	I1006 19:39:06.045913  164904 pod_ready.go:94] pod "coredns-66bc5c9577-b49dq" is "Ready"
	I1006 19:39:06.045947  164904 pod_ready.go:86] duration metric: took 5.005978s for pod "coredns-66bc5c9577-b49dq" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:06.049561  164904 pod_ready.go:83] waiting for pod "etcd-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:07.056430  164904 pod_ready.go:94] pod "etcd-pause-719933" is "Ready"
	I1006 19:39:07.056460  164904 pod_ready.go:86] duration metric: took 1.006867607s for pod "etcd-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:07.059170  164904 pod_ready.go:83] waiting for pod "kube-apiserver-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:07.564637  164904 pod_ready.go:94] pod "kube-apiserver-pause-719933" is "Ready"
	I1006 19:39:07.564667  164904 pod_ready.go:86] duration metric: took 505.473585ms for pod "kube-apiserver-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:07.567229  164904 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:07.572475  164904 pod_ready.go:94] pod "kube-controller-manager-pause-719933" is "Ready"
	I1006 19:39:07.572503  164904 pod_ready.go:86] duration metric: took 5.246935ms for pod "kube-controller-manager-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:07.643384  164904 pod_ready.go:83] waiting for pod "kube-proxy-jq5mn" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:08.044119  164904 pod_ready.go:94] pod "kube-proxy-jq5mn" is "Ready"
	I1006 19:39:08.044149  164904 pod_ready.go:86] duration metric: took 400.68726ms for pod "kube-proxy-jq5mn" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:08.243530  164904 pod_ready.go:83] waiting for pod "kube-scheduler-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	W1006 19:39:10.253158  164904 pod_ready.go:104] pod "kube-scheduler-pause-719933" is not "Ready", error: <nil>
	I1006 19:39:10.748546  164904 pod_ready.go:94] pod "kube-scheduler-pause-719933" is "Ready"
	I1006 19:39:10.748575  164904 pod_ready.go:86] duration metric: took 2.505009264s for pod "kube-scheduler-pause-719933" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:39:10.748587  164904 pod_ready.go:40] duration metric: took 9.712304394s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:39:10.801622  164904 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 19:39:10.804830  164904 out.go:179] * Done! kubectl is now configured to use "pause-719933" cluster and "default" namespace by default
	I1006 19:39:08.012630  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:39:08.013169  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:39:08.013249  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:39:08.013339  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:39:08.049149  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:08.049179  149088 cri.go:89] found id: ""
	I1006 19:39:08.049187  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:39:08.049287  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:08.053089  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:39:08.053205  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:39:08.085392  149088 cri.go:89] found id: ""
	I1006 19:39:08.085416  149088 logs.go:282] 0 containers: []
	W1006 19:39:08.085425  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:39:08.085431  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:39:08.085506  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:39:08.115010  149088 cri.go:89] found id: ""
	I1006 19:39:08.115035  149088 logs.go:282] 0 containers: []
	W1006 19:39:08.115067  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:39:08.115102  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:39:08.115190  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:39:08.143209  149088 cri.go:89] found id: ""
	I1006 19:39:08.143235  149088 logs.go:282] 0 containers: []
	W1006 19:39:08.143244  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:39:08.143250  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:39:08.143336  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:39:08.173467  149088 cri.go:89] found id: ""
	I1006 19:39:08.173490  149088 logs.go:282] 0 containers: []
	W1006 19:39:08.173499  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:39:08.173505  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:39:08.173583  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:39:08.200847  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:08.200869  149088 cri.go:89] found id: ""
	I1006 19:39:08.200877  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:39:08.200933  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:08.204560  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:39:08.204651  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:39:08.231899  149088 cri.go:89] found id: ""
	I1006 19:39:08.231961  149088 logs.go:282] 0 containers: []
	W1006 19:39:08.231977  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:39:08.231984  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:39:08.232041  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:39:08.260519  149088 cri.go:89] found id: ""
	I1006 19:39:08.260545  149088 logs.go:282] 0 containers: []
	W1006 19:39:08.260553  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:39:08.260562  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:39:08.260592  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:39:08.386024  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:39:08.386058  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:39:08.401867  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:39:08.401893  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:39:08.472165  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:39:08.472225  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:39:08.472265  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:08.503456  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:39:08.503483  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:08.529010  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:39:08.529036  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:39:08.573670  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:39:08.573706  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 19:39:11.107215  149088 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:39:11.107666  149088 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1006 19:39:11.107741  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 19:39:11.107810  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 19:39:11.145197  149088 cri.go:89] found id: "07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:11.145217  149088 cri.go:89] found id: ""
	I1006 19:39:11.145225  149088 logs.go:282] 1 containers: [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10]
	I1006 19:39:11.145282  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:11.152129  149088 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 19:39:11.152204  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 19:39:11.210048  149088 cri.go:89] found id: ""
	I1006 19:39:11.210069  149088 logs.go:282] 0 containers: []
	W1006 19:39:11.210077  149088 logs.go:284] No container was found matching "etcd"
	I1006 19:39:11.210084  149088 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 19:39:11.210143  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 19:39:11.254430  149088 cri.go:89] found id: ""
	I1006 19:39:11.254511  149088 logs.go:282] 0 containers: []
	W1006 19:39:11.254535  149088 logs.go:284] No container was found matching "coredns"
	I1006 19:39:11.254556  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 19:39:11.254651  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 19:39:11.287344  149088 cri.go:89] found id: ""
	I1006 19:39:11.287365  149088 logs.go:282] 0 containers: []
	W1006 19:39:11.287373  149088 logs.go:284] No container was found matching "kube-scheduler"
	I1006 19:39:11.287380  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 19:39:11.287441  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 19:39:11.324194  149088 cri.go:89] found id: ""
	I1006 19:39:11.324216  149088 logs.go:282] 0 containers: []
	W1006 19:39:11.324224  149088 logs.go:284] No container was found matching "kube-proxy"
	I1006 19:39:11.324230  149088 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 19:39:11.324288  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 19:39:11.356109  149088 cri.go:89] found id: "fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:11.356133  149088 cri.go:89] found id: ""
	I1006 19:39:11.356145  149088 logs.go:282] 1 containers: [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9]
	I1006 19:39:11.356214  149088 ssh_runner.go:195] Run: which crictl
	I1006 19:39:11.359980  149088 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 19:39:11.360048  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 19:39:11.384975  149088 cri.go:89] found id: ""
	I1006 19:39:11.384996  149088 logs.go:282] 0 containers: []
	W1006 19:39:11.385005  149088 logs.go:284] No container was found matching "kindnet"
	I1006 19:39:11.385011  149088 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1006 19:39:11.385073  149088 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1006 19:39:11.418804  149088 cri.go:89] found id: ""
	I1006 19:39:11.418828  149088 logs.go:282] 0 containers: []
	W1006 19:39:11.418837  149088 logs.go:284] No container was found matching "storage-provisioner"
	I1006 19:39:11.418847  149088 logs.go:123] Gathering logs for kubelet ...
	I1006 19:39:11.418892  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 19:39:11.554979  149088 logs.go:123] Gathering logs for dmesg ...
	I1006 19:39:11.555018  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 19:39:11.570301  149088 logs.go:123] Gathering logs for describe nodes ...
	I1006 19:39:11.570331  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1006 19:39:11.637473  149088 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1006 19:39:11.637494  149088 logs.go:123] Gathering logs for kube-apiserver [07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10] ...
	I1006 19:39:11.637514  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 07dea3b6cde9c613d98253cb3ed5ee9d7d7d2f42df04ecc0d75b1416f36a5e10"
	I1006 19:39:11.675231  149088 logs.go:123] Gathering logs for kube-controller-manager [fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9] ...
	I1006 19:39:11.675263  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fed5e38bb6e459dbe7b3401aaeaae734d931bbcbef3976801a8de404a8a176d9"
	I1006 19:39:11.706873  149088 logs.go:123] Gathering logs for CRI-O ...
	I1006 19:39:11.706909  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 19:39:11.750738  149088 logs.go:123] Gathering logs for container status ...
	I1006 19:39:11.750771  149088 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.759410966Z" level=info msg="Created container 893d9c10f4bd96fc39fc11cf22e3f42bed5f93fc5a7fc287a9290b2e3ff84dfe: kube-system/coredns-66bc5c9577-b49dq/coredns" id=500acf4b-9bb9-44a8-87f8-8486ba3b83cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.76021247Z" level=info msg="Starting container: 893d9c10f4bd96fc39fc11cf22e3f42bed5f93fc5a7fc287a9290b2e3ff84dfe" id=c08cc043-3cf0-45cd-9f34-23636d3a8add name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.76104659Z" level=info msg="Started container" PID=2348 containerID=8e8330f950aa7b3752a5fe13d772a7e3fdbbadc6d01761c6972079bdf20a9513 description=kube-system/kube-scheduler-pause-719933/kube-scheduler id=d1721894-0c45-4656-aeba-9b6207e88d57 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2193f19dd2130ebb6566276a871d2ec65403ea1d5ff68e46aac392f442bf2587
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.762509298Z" level=info msg="Started container" PID=2367 containerID=893d9c10f4bd96fc39fc11cf22e3f42bed5f93fc5a7fc287a9290b2e3ff84dfe description=kube-system/coredns-66bc5c9577-b49dq/coredns id=c08cc043-3cf0-45cd-9f34-23636d3a8add name=/runtime.v1.RuntimeService/StartContainer sandboxID=ff4b890809a10075e9d9b27e7f8d66e7bc70859af64e22eede9ff7b21ceeb2f0
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.766018289Z" level=info msg="Created container 961b41de3573fdda3b8ea4db9c54d7e2815089c28a58832b9720d3c4a66d4d91: kube-system/kube-controller-manager-pause-719933/kube-controller-manager" id=36b90307-27f2-47ff-9418-04c904f6b089 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.770023996Z" level=info msg="Starting container: 961b41de3573fdda3b8ea4db9c54d7e2815089c28a58832b9720d3c4a66d4d91" id=0db926ca-0b03-453f-b1c0-170ea6f7518a name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.771424124Z" level=info msg="Created container fbfdf2dfedc0e45c2f1e89a8c3423338cddc71d3952ca96617352bba0eeaeb18: kube-system/kube-apiserver-pause-719933/kube-apiserver" id=eb2f6582-ac72-4e65-bfab-d466e1115e15 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.772378769Z" level=info msg="Starting container: fbfdf2dfedc0e45c2f1e89a8c3423338cddc71d3952ca96617352bba0eeaeb18" id=b3dc9ae8-aa4d-44cc-aca4-a7a622ae4a17 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.773589595Z" level=info msg="Started container" PID=2357 containerID=961b41de3573fdda3b8ea4db9c54d7e2815089c28a58832b9720d3c4a66d4d91 description=kube-system/kube-controller-manager-pause-719933/kube-controller-manager id=0db926ca-0b03-453f-b1c0-170ea6f7518a name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c153a4d1fe87b6e19a595ca209e8194d3824439f8b0d712f2ceb2bd05357e3c
	Oct 06 19:38:53 pause-719933 crio[2053]: time="2025-10-06T19:38:53.780482654Z" level=info msg="Started container" PID=2362 containerID=fbfdf2dfedc0e45c2f1e89a8c3423338cddc71d3952ca96617352bba0eeaeb18 description=kube-system/kube-apiserver-pause-719933/kube-apiserver id=b3dc9ae8-aa4d-44cc-aca4-a7a622ae4a17 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b416248c2e2b247c730efff9c94d2e1071c117163e375572506ae8651db1b97b
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.946469756Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.949944541Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.949983384Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.950005743Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.953345693Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.953381033Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.953406009Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.956602714Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.956635149Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.956656613Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.959569706Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.959602076Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.959626199Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.962564654Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:39:03 pause-719933 crio[2053]: time="2025-10-06T19:39:03.962595382Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	fbfdf2dfedc0e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   22 seconds ago       Running             kube-apiserver            1                   b416248c2e2b2       kube-apiserver-pause-719933            kube-system
	893d9c10f4bd9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   ff4b890809a10       coredns-66bc5c9577-b49dq               kube-system
	961b41de3573f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago       Running             kube-controller-manager   1                   7c153a4d1fe87       kube-controller-manager-pause-719933   kube-system
	8e8330f950aa7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago       Running             kube-scheduler            1                   2193f19dd2130       kube-scheduler-pause-719933            kube-system
	aa224c01881ad       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   22 seconds ago       Running             kube-proxy                1                   6f8b6d3f47078       kube-proxy-jq5mn                       kube-system
	bd349138ed2da       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   01436df0d4ba0       kindnet-g6m52                          kube-system
	b4eb3f8f3e81f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   22 seconds ago       Running             etcd                      1                   5056a33af39dd       etcd-pause-719933                      kube-system
	6ccbb47882248       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   ff4b890809a10       coredns-66bc5c9577-b49dq               kube-system
	611f7ec82bc68       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   6f8b6d3f47078       kube-proxy-jq5mn                       kube-system
	4d6e88c27772c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   01436df0d4ba0       kindnet-g6m52                          kube-system
	70f2389a27f2c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   b416248c2e2b2       kube-apiserver-pause-719933            kube-system
	098520989a4cc       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   2193f19dd2130       kube-scheduler-pause-719933            kube-system
	c7989b7481b36       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   7c153a4d1fe87       kube-controller-manager-pause-719933   kube-system
	dbf88fd449582       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   5056a33af39dd       etcd-pause-719933                      kube-system
	
	
	==> coredns [6ccbb47882248567001f98caa3ed6d08c96bd67cf7e41b581b8e20288b7ba653] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49335 - 22388 "HINFO IN 7626729546313852129.7134203665847368022. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021044208s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [893d9c10f4bd96fc39fc11cf22e3f42bed5f93fc5a7fc287a9290b2e3ff84dfe] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59917 - 64056 "HINFO IN 307417467803846690.3552725584248081553. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025161058s
	
	
	==> describe nodes <==
	Name:               pause-719933
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-719933
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=pause-719933
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_37_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:37:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-719933
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:39:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:38:41 +0000   Mon, 06 Oct 2025 19:37:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:38:41 +0000   Mon, 06 Oct 2025 19:37:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:38:41 +0000   Mon, 06 Oct 2025 19:37:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:38:41 +0000   Mon, 06 Oct 2025 19:38:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-719933
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7935b53324c44557941dec33252d44c4
	  System UUID:                760c6b82-3667-4718-92b4-cf492912ca13
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-b49dq                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-pause-719933                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kindnet-g6m52                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      77s
	  kube-system                 kube-apiserver-pause-719933             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-719933    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-jq5mn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-719933             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 75s   kube-proxy       
	  Normal   Starting                 16s   kube-proxy       
	  Normal   Starting                 82s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 82s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  82s   kubelet          Node pause-719933 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    82s   kubelet          Node pause-719933 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     82s   kubelet          Node pause-719933 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           78s   node-controller  Node pause-719933 event: Registered Node pause-719933 in Controller
	  Normal   NodeReady                35s   kubelet          Node pause-719933 status is now: NodeReady
	  Normal   RegisteredNode           14s   node-controller  Node pause-719933 event: Registered Node pause-719933 in Controller
	
	
	==> dmesg <==
	[Oct 6 19:11] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:12] overlayfs: idmapped layers are currently not supported
	[  +3.608985] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:13] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:14] overlayfs: idmapped layers are currently not supported
	[ +11.752506] hrtimer: interrupt took 8273017 ns
	[Oct 6 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:21] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:23] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b4eb3f8f3e81f8af021cd9dd5ff7fd72c58ef133553cd056ccab41427cc64ece] <==
	{"level":"warn","ts":"2025-10-06T19:38:57.478029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.496847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.518778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.547900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.567817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.592809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.602793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.620751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.639257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.655859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.673583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.690634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.709848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.726448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.755758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.764602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.779004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.800756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.815594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.833259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.858943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.889342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.907749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:57.920473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:38:58.027343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52578","server-name":"","error":"EOF"}
	
	
	==> etcd [dbf88fd449582624356375d9d2477b46834de96fb325ba01fe1444314a62865e] <==
	{"level":"warn","ts":"2025-10-06T19:37:50.555586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:37:50.568739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:37:50.589410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:37:50.615886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:37:50.630290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:37:50.651339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:37:50.744939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34588","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T19:38:44.946390Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-06T19:38:44.946444Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-719933","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-06T19:38:44.946542Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-06T19:38:44.946602Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-06T19:38:45.218962Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T19:38:45.219033Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-10-06T19:38:45.219040Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-06T19:38:45.219093Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-06T19:38:45.219103Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T19:38:45.219108Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-06T19:38:45.219124Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-06T19:38:45.219185Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-06T19:38:45.219197Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-06T19:38:45.219205Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T19:38:45.222920Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-06T19:38:45.223042Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T19:38:45.223104Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-06T19:38:45.223113Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-719933","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 19:39:16 up  1:21,  0 user,  load average: 2.05, 2.47, 2.18
	Linux pause-719933 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4d6e88c27772c35c97d8c35c87ce943957683a84df865f52d99dc5d63f21d8a0] <==
	I1006 19:38:00.545967       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:38:00.550545       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1006 19:38:00.550700       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:38:00.550712       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:38:00.550723       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:38:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:38:00.746769       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:38:00.747806       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:38:00.747891       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:38:00.748025       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1006 19:38:30.747799       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1006 19:38:30.747804       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1006 19:38:30.747904       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1006 19:38:30.747932       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1006 19:38:32.348008       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:38:32.348048       1 metrics.go:72] Registering metrics
	I1006 19:38:32.348128       1 controller.go:711] "Syncing nftables rules"
	I1006 19:38:40.753660       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1006 19:38:40.753700       1 main.go:301] handling current node
	
	
	==> kindnet [bd349138ed2daf1aa487424d2bc98d409e705c9241944a3c17768fd46f7f8289] <==
	I1006 19:38:53.751239       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:38:53.751425       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1006 19:38:53.751564       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:38:53.751580       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:38:53.751589       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:38:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:38:53.945516       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:38:53.945600       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:38:53.945639       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:38:53.956790       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1006 19:38:59.559787       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:38:59.559906       1 metrics.go:72] Registering metrics
	I1006 19:38:59.559993       1 controller.go:711] "Syncing nftables rules"
	I1006 19:39:03.946031       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1006 19:39:03.946118       1 main.go:301] handling current node
	I1006 19:39:13.946412       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1006 19:39:13.946474       1 main.go:301] handling current node
	
	
	==> kube-apiserver [70f2389a27f2cceb23c12fdf02125896dba5d458c676409a98715563aad756f6] <==
	W1006 19:38:44.984789       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984812       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984832       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984841       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984892       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984945       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984965       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984993       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.984998       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985025       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985053       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985052       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985082       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985103       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985108       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985135       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985162       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985216       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985247       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985306       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985350       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985354       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985380       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.985394       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 19:38:44.986061       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fbfdf2dfedc0e45c2f1e89a8c3423338cddc71d3952ca96617352bba0eeaeb18] <==
	I1006 19:38:59.413614       1 policy_source.go:240] refreshing policies
	I1006 19:38:59.414465       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1006 19:38:59.414563       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1006 19:38:59.414798       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1006 19:38:59.414956       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1006 19:38:59.415185       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:38:59.415229       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1006 19:38:59.415236       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1006 19:38:59.417270       1 aggregator.go:171] initial CRD sync complete...
	I1006 19:38:59.417299       1 autoregister_controller.go:144] Starting autoregister controller
	I1006 19:38:59.417306       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 19:38:59.418865       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1006 19:38:59.449427       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:38:59.451577       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1006 19:38:59.451648       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 19:38:59.459963       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1006 19:38:59.467140       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1006 19:38:59.521014       1 cache.go:39] Caches are synced for autoregister controller
	E1006 19:38:59.522051       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1006 19:38:59.907545       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:39:01.235202       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 19:39:02.668732       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 19:39:02.814236       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 19:39:02.915807       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 19:39:03.033030       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [961b41de3573fdda3b8ea4db9c54d7e2815089c28a58832b9720d3c4a66d4d91] <==
	I1006 19:39:02.643781       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:39:02.643815       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 19:39:02.639764       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1006 19:39:02.640949       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1006 19:39:02.640969       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1006 19:39:02.640987       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1006 19:39:02.644861       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1006 19:39:02.644904       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1006 19:39:02.644916       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1006 19:39:02.644927       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1006 19:39:02.640996       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1006 19:39:02.641007       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1006 19:39:02.641019       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1006 19:39:02.641335       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 19:39:02.655841       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:39:02.656550       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1006 19:39:02.656690       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1006 19:39:02.657287       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1006 19:39:02.659829       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1006 19:39:02.659977       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1006 19:39:02.660087       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-719933"
	I1006 19:39:02.660152       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1006 19:39:02.660736       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1006 19:39:02.660812       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1006 19:39:02.667835       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [c7989b7481b364d561193295584c5dca811b622af9cf4e24341e42e50a91f5fc] <==
	I1006 19:37:58.583472       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1006 19:37:58.583499       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:37:58.583514       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:37:58.583522       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 19:37:58.583802       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1006 19:37:58.585157       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1006 19:37:58.585189       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1006 19:37:58.585278       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1006 19:37:58.585379       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1006 19:37:58.586812       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 19:37:58.586891       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1006 19:37:58.592163       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1006 19:37:58.592239       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1006 19:37:58.592273       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1006 19:37:58.592279       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1006 19:37:58.592284       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1006 19:37:58.592521       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1006 19:37:58.602318       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-719933" podCIDRs=["10.244.0.0/24"]
	I1006 19:37:58.616725       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1006 19:37:58.632391       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1006 19:37:58.632525       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1006 19:37:58.632691       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-719933"
	I1006 19:37:58.632827       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1006 19:37:58.633066       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1006 19:38:43.639127       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [611f7ec82bc68048be0ca0d31606b1d278412943b79fc8a0c4c33da5805164c5] <==
	I1006 19:38:00.538540       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:38:00.656036       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:38:00.756215       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:38:00.756253       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1006 19:38:00.756345       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:38:00.859570       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:38:00.859631       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:38:00.863428       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:38:00.863904       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:38:00.863931       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:38:00.867146       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:38:00.867167       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:38:00.867448       1 config.go:200] "Starting service config controller"
	I1006 19:38:00.867462       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:38:00.867912       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:38:00.867927       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:38:00.868313       1 config.go:309] "Starting node config controller"
	I1006 19:38:00.868329       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:38:00.868336       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:38:00.967407       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1006 19:38:00.968556       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 19:38:00.968705       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [aa224c01881ad4743b57a626d9ef90f9c1ac21439aa850feca08f658dd0552d9] <==
	I1006 19:38:56.949139       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:38:57.572099       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:38:59.540364       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:38:59.540632       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1006 19:38:59.541255       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:38:59.729295       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:38:59.729355       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:38:59.734201       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:38:59.734569       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:38:59.734794       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:38:59.736176       1 config.go:200] "Starting service config controller"
	I1006 19:38:59.736264       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:38:59.736307       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:38:59.736354       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:38:59.736391       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:38:59.736424       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:38:59.737098       1 config.go:309] "Starting node config controller"
	I1006 19:38:59.737177       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:38:59.737210       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:38:59.836938       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 19:38:59.839400       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1006 19:38:59.839805       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [098520989a4ccf81cd5f027022cd81c910e3d341bf7918c27926e6a914e07feb] <==
	E1006 19:37:52.450460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1006 19:37:52.450563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1006 19:37:52.455516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1006 19:37:52.458264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1006 19:37:52.458417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1006 19:37:52.458495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1006 19:37:52.458592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1006 19:37:52.458686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1006 19:37:52.458772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1006 19:37:52.458855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1006 19:37:52.459951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1006 19:37:52.460069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1006 19:37:52.460119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1006 19:37:52.460198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1006 19:37:52.460292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1006 19:37:52.460374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1006 19:37:52.460628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1006 19:37:52.461048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1006 19:37:53.943275       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:38:44.955887       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1006 19:38:44.955944       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:38:44.956372       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1006 19:38:44.956396       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1006 19:38:44.956411       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1006 19:38:44.956430       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8e8330f950aa7b3752a5fe13d772a7e3fdbbadc6d01761c6972079bdf20a9513] <==
	I1006 19:38:57.282336       1 serving.go:386] Generated self-signed cert in-memory
	I1006 19:38:59.865170       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 19:38:59.865286       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:38:59.874357       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 19:38:59.874448       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1006 19:38:59.874474       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1006 19:38:59.874511       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 19:38:59.876375       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:38:59.876462       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:38:59.877121       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:38:59.877184       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:38:59.974571       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1006 19:38:59.976980       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:38:59.977716       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.625285    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-g6m52\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="af518d42-83f8-4dc8-95ad-ae6659a36a4b" pod="kube-system/kindnet-g6m52"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.625487    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-b49dq\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cd7b9c84-825b-4c88-9282-6ab75d1df072" pod="kube-system/coredns-66bc5c9577-b49dq"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: I1006 19:38:53.660326    1299 scope.go:117] "RemoveContainer" containerID="c7989b7481b364d561193295584c5dca811b622af9cf4e24341e42e50a91f5fc"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.661020    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-b49dq\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cd7b9c84-825b-4c88-9282-6ab75d1df072" pod="kube-system/coredns-66bc5c9577-b49dq"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.661344    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="64162cca36cad2a0ca53be81f5ca11cf" pod="kube-system/kube-apiserver-pause-719933"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.661624    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5830075111f19e963836198e0c56d9e1" pod="kube-system/etcd-pause-719933"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.661900    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e330089aec367c8728505d7f5d82f715" pod="kube-system/kube-controller-manager-pause-719933"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.662159    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="39fb5f3f07e723d10f30087f3606f27d" pod="kube-system/kube-scheduler-pause-719933"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.662417    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jq5mn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e0bdba86-2eef-494a-a380-06b1e0a60cdf" pod="kube-system/kube-proxy-jq5mn"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.662677    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-g6m52\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="af518d42-83f8-4dc8-95ad-ae6659a36a4b" pod="kube-system/kindnet-g6m52"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: I1006 19:38:53.665881    1299 scope.go:117] "RemoveContainer" containerID="70f2389a27f2cceb23c12fdf02125896dba5d458c676409a98715563aad756f6"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.666552    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e330089aec367c8728505d7f5d82f715" pod="kube-system/kube-controller-manager-pause-719933"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.666850    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="39fb5f3f07e723d10f30087f3606f27d" pod="kube-system/kube-scheduler-pause-719933"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.667193    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jq5mn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e0bdba86-2eef-494a-a380-06b1e0a60cdf" pod="kube-system/kube-proxy-jq5mn"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.667476    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-g6m52\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="af518d42-83f8-4dc8-95ad-ae6659a36a4b" pod="kube-system/kindnet-g6m52"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.667760    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-b49dq\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cd7b9c84-825b-4c88-9282-6ab75d1df072" pod="kube-system/coredns-66bc5c9577-b49dq"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.668041    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="64162cca36cad2a0ca53be81f5ca11cf" pod="kube-system/kube-apiserver-pause-719933"
	Oct 06 19:38:53 pause-719933 kubelet[1299]: E1006 19:38:53.668313    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-719933\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5830075111f19e963836198e0c56d9e1" pod="kube-system/etcd-pause-719933"
	Oct 06 19:38:54 pause-719933 kubelet[1299]: W1006 19:38:54.598049    1299 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 06 19:38:58 pause-719933 kubelet[1299]: E1006 19:38:58.896683    1299 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-719933\" is forbidden: User \"system:node:pause-719933\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-719933' and this object" podUID="5830075111f19e963836198e0c56d9e1" pod="kube-system/etcd-pause-719933"
	Oct 06 19:38:58 pause-719933 kubelet[1299]: E1006 19:38:58.897322    1299 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-719933\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-719933' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 06 19:39:04 pause-719933 kubelet[1299]: W1006 19:39:04.614562    1299 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 06 19:39:11 pause-719933 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 06 19:39:11 pause-719933 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 06 19:39:11 pause-719933 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-719933 -n pause-719933
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-719933 -n pause-719933: exit status 2 (365.763784ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-719933 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.64s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-100545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-100545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (257.83243ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:51:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-100545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-100545 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-100545 describe deploy/metrics-server -n kube-system: exit status 1 (79.583635ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-100545 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-100545
helpers_test.go:243: (dbg) docker inspect old-k8s-version-100545:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b",
	        "Created": "2025-10-06T19:50:04.309020012Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 185052,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:50:04.34596714Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/hostname",
	        "HostsPath": "/var/lib/docker/containers/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/hosts",
	        "LogPath": "/var/lib/docker/containers/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b-json.log",
	        "Name": "/old-k8s-version-100545",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-100545:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-100545",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b",
	                "LowerDir": "/var/lib/docker/overlay2/7739f5fcc80275a40519b7d8366e72b83c80e8b92ef511dd23fbdc4e3cd32dd2-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7739f5fcc80275a40519b7d8366e72b83c80e8b92ef511dd23fbdc4e3cd32dd2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7739f5fcc80275a40519b7d8366e72b83c80e8b92ef511dd23fbdc4e3cd32dd2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7739f5fcc80275a40519b7d8366e72b83c80e8b92ef511dd23fbdc4e3cd32dd2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-100545",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-100545/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-100545",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-100545",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-100545",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "01f017fe033cce05c8e358f975014e49537dc130c82231ba9266378451512e9a",
	            "SandboxKey": "/var/run/docker/netns/01f017fe033c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-100545": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:1f:b2:e6:6f:96",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70390eeacb58521b859ee9aa701da0b462d8cfbec3301aa774d326d82c9a1e6e",
	                    "EndpointID": "cfe1e3a2d7e361c7b0dc290415991539a14b03e0b3433ce8e72daed55b7bc3db",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-100545",
	                        "44567b8f0b33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-100545 -n old-k8s-version-100545
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-100545 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-100545 logs -n 25: (1.1852656s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-053944 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo containerd config dump                                                                                                                                                                                                  │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo crio config                                                                                                                                                                                                             │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ delete  │ -p cilium-053944                                                                                                                                                                                                                              │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │ 06 Oct 25 19:40 UTC │
	│ start   │ -p force-systemd-env-760371 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-760371  │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ force-systemd-flag-203169 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-203169 │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:47 UTC │
	│ delete  │ -p force-systemd-flag-203169                                                                                                                                                                                                                  │ force-systemd-flag-203169 │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:47 UTC │
	│ start   │ -p cert-expiration-585086 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-585086    │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:48 UTC │
	│ delete  │ -p force-systemd-env-760371                                                                                                                                                                                                                   │ force-systemd-env-760371  │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ start   │ -p cert-options-593131 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ ssh     │ cert-options-593131 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ ssh     │ -p cert-options-593131 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ delete  │ -p cert-options-593131                                                                                                                                                                                                                        │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-100545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:49:57
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:49:57.572466  184657 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:49:57.572653  184657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:49:57.572687  184657 out.go:374] Setting ErrFile to fd 2...
	I1006 19:49:57.572707  184657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:49:57.573035  184657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:49:57.573570  184657 out.go:368] Setting JSON to false
	I1006 19:49:57.574497  184657 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5533,"bootTime":1759774665,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:49:57.574599  184657 start.go:140] virtualization:  
	I1006 19:49:57.580155  184657 out.go:179] * [old-k8s-version-100545] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:49:57.583521  184657 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:49:57.583578  184657 notify.go:220] Checking for updates...
	I1006 19:49:57.589568  184657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:49:57.592617  184657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:49:57.595546  184657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:49:57.599220  184657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:49:57.602302  184657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:49:57.606363  184657 config.go:182] Loaded profile config "cert-expiration-585086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:49:57.606568  184657 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:49:57.627897  184657 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:49:57.628026  184657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:49:57.687728  184657 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:49:57.674822711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:49:57.687855  184657 docker.go:318] overlay module found
	I1006 19:49:57.691079  184657 out.go:179] * Using the docker driver based on user configuration
	I1006 19:49:57.693926  184657 start.go:304] selected driver: docker
	I1006 19:49:57.693954  184657 start.go:924] validating driver "docker" against <nil>
	I1006 19:49:57.693978  184657 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:49:57.694752  184657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:49:57.754222  184657 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:49:57.744320913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:49:57.754376  184657 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 19:49:57.754615  184657 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:49:57.757513  184657 out.go:179] * Using Docker driver with root privileges
	I1006 19:49:57.760262  184657 cni.go:84] Creating CNI manager for ""
	I1006 19:49:57.760344  184657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:49:57.760361  184657 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 19:49:57.760450  184657 start.go:348] cluster config:
	{Name:old-k8s-version-100545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-100545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:49:57.765298  184657 out.go:179] * Starting "old-k8s-version-100545" primary control-plane node in "old-k8s-version-100545" cluster
	I1006 19:49:57.768244  184657 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:49:57.771288  184657 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:49:57.774286  184657 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1006 19:49:57.774291  184657 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:49:57.774376  184657 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1006 19:49:57.774388  184657 cache.go:58] Caching tarball of preloaded images
	I1006 19:49:57.774475  184657 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:49:57.774490  184657 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1006 19:49:57.774609  184657 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/config.json ...
	I1006 19:49:57.774636  184657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/config.json: {Name:mk545750ca392bc9c7ff64d7a7ccb7d480bfec9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:49:57.796434  184657 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:49:57.796460  184657 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:49:57.796486  184657 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:49:57.796513  184657 start.go:360] acquireMachinesLock for old-k8s-version-100545: {Name:mk778890d9b94c6d4e2ce6a766d6834cdd4dfb8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:49:57.796633  184657 start.go:364] duration metric: took 101.532µs to acquireMachinesLock for "old-k8s-version-100545"
	I1006 19:49:57.796662  184657 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-100545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-100545 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:49:57.796741  184657 start.go:125] createHost starting for "" (driver="docker")
	I1006 19:49:57.800323  184657 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 19:49:57.800587  184657 start.go:159] libmachine.API.Create for "old-k8s-version-100545" (driver="docker")
	I1006 19:49:57.800632  184657 client.go:168] LocalClient.Create starting
	I1006 19:49:57.800711  184657 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem
	I1006 19:49:57.800749  184657 main.go:141] libmachine: Decoding PEM data...
	I1006 19:49:57.800767  184657 main.go:141] libmachine: Parsing certificate...
	I1006 19:49:57.800820  184657 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem
	I1006 19:49:57.800849  184657 main.go:141] libmachine: Decoding PEM data...
	I1006 19:49:57.800862  184657 main.go:141] libmachine: Parsing certificate...
	I1006 19:49:57.801243  184657 cli_runner.go:164] Run: docker network inspect old-k8s-version-100545 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 19:49:57.825220  184657 cli_runner.go:211] docker network inspect old-k8s-version-100545 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 19:49:57.825319  184657 network_create.go:284] running [docker network inspect old-k8s-version-100545] to gather additional debugging logs...
	I1006 19:49:57.825340  184657 cli_runner.go:164] Run: docker network inspect old-k8s-version-100545
	W1006 19:49:57.845203  184657 cli_runner.go:211] docker network inspect old-k8s-version-100545 returned with exit code 1
	I1006 19:49:57.845250  184657 network_create.go:287] error running [docker network inspect old-k8s-version-100545]: docker network inspect old-k8s-version-100545: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-100545 not found
	I1006 19:49:57.845268  184657 network_create.go:289] output of [docker network inspect old-k8s-version-100545]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-100545 not found
	
	** /stderr **
	I1006 19:49:57.845395  184657 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:49:57.864238  184657 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7058eae896da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:57:c0:bd:de:1b} reservation:<nil>}
	I1006 19:49:57.864581  184657 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e321ce8cf6dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:e6:76:87:fe:bb} reservation:<nil>}
	I1006 19:49:57.864907  184657 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-17bdbd7bd7be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:93:f3:19:55:10} reservation:<nil>}
	I1006 19:49:57.865315  184657 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400161ab10}
	I1006 19:49:57.865339  184657 network_create.go:124] attempt to create docker network old-k8s-version-100545 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1006 19:49:57.865399  184657 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-100545 old-k8s-version-100545
	I1006 19:49:57.937313  184657 network_create.go:108] docker network old-k8s-version-100545 192.168.76.0/24 created
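For readers reproducing this step outside the test harness, the network created above can be recreated and checked with the docker CLI directly. This is an illustrative sketch only; the profile name, subnet, gateway and labels are taken from the log lines above, and minikube normally performs this step itself.

    # Create an equivalent labelled bridge network (values from the log above).
    docker network create \
      --driver=bridge \
      --subnet=192.168.76.0/24 \
      --gateway=192.168.76.1 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=old-k8s-version-100545 \
      old-k8s-version-100545

    # Confirm the subnet and gateway were applied.
    docker network inspect old-k8s-version-100545 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'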
	I1006 19:49:57.937350  184657 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-100545" container
	I1006 19:49:57.937486  184657 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 19:49:57.955504  184657 cli_runner.go:164] Run: docker volume create old-k8s-version-100545 --label name.minikube.sigs.k8s.io=old-k8s-version-100545 --label created_by.minikube.sigs.k8s.io=true
	I1006 19:49:57.975164  184657 oci.go:103] Successfully created a docker volume old-k8s-version-100545
	I1006 19:49:57.975246  184657 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-100545-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-100545 --entrypoint /usr/bin/test -v old-k8s-version-100545:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 19:49:58.523360  184657 oci.go:107] Successfully prepared a docker volume old-k8s-version-100545
	I1006 19:49:58.523430  184657 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1006 19:49:58.523450  184657 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 19:49:58.523547  184657 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-100545:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 19:50:04.231820  184657 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-100545:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (5.708218936s)
	I1006 19:50:04.231848  184657 kic.go:203] duration metric: took 5.708395586s to extract preloaded images to volume ...
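The preload step above unpacks an lz4-compressed image tarball into the profile's docker volume. As a sketch (assuming lz4 is installed on the host, as the extraction command above already requires), the same tar invocation can list the archive instead of extracting it if the cache ever needs to be checked by hand:

    # List the contents of the preloaded image tarball without extracting it.
    tar -I lz4 -tf \
      /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 \
      | head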
	W1006 19:50:04.231986  184657 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 19:50:04.232089  184657 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 19:50:04.293124  184657 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-100545 --name old-k8s-version-100545 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-100545 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-100545 --network old-k8s-version-100545 --ip 192.168.76.2 --volume old-k8s-version-100545:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 19:50:04.558441  184657 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Running}}
	I1006 19:50:04.587146  184657 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:50:04.607460  184657 cli_runner.go:164] Run: docker exec old-k8s-version-100545 stat /var/lib/dpkg/alternatives/iptables
	I1006 19:50:04.661530  184657 oci.go:144] the created container "old-k8s-version-100545" has a running status.
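Once the kic container is running, the resource limits requested at start (3072 MB memory, 2 CPUs) can be spot-checked with docker inspect. A minimal sketch; the field paths are standard docker inspect output, not anything minikube-specific:

    # Status plus memory limit (bytes) and CPU quota applied to the profile container.
    docker container inspect old-k8s-version-100545 \
      --format '{{.State.Status}} {{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}'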
	I1006 19:50:04.661563  184657 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa...
	I1006 19:50:05.272125  184657 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 19:50:05.292448  184657 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:50:05.310326  184657 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 19:50:05.310354  184657 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-100545 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 19:50:05.351584  184657 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:50:05.370604  184657 machine.go:93] provisionDockerMachine start ...
	I1006 19:50:05.370708  184657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:50:05.388652  184657 main.go:141] libmachine: Using SSH client type: native
	I1006 19:50:05.388987  184657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33050 <nil> <nil>}
	I1006 19:50:05.389002  184657 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:50:05.389634  184657 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1006 19:50:08.527182  184657 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-100545
	
	I1006 19:50:08.527256  184657 ubuntu.go:182] provisioning hostname "old-k8s-version-100545"
	I1006 19:50:08.527355  184657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:50:08.545302  184657 main.go:141] libmachine: Using SSH client type: native
	I1006 19:50:08.545615  184657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33050 <nil> <nil>}
	I1006 19:50:08.545632  184657 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-100545 && echo "old-k8s-version-100545" | sudo tee /etc/hostname
	I1006 19:50:08.689363  184657 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-100545
	
	I1006 19:50:08.689442  184657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:50:08.707727  184657 main.go:141] libmachine: Using SSH client type: native
	I1006 19:50:08.708052  184657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33050 <nil> <nil>}
	I1006 19:50:08.708076  184657 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-100545' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-100545/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-100545' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:50:08.843896  184657 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:50:08.843922  184657 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:50:08.843939  184657 ubuntu.go:190] setting up certificates
	I1006 19:50:08.843948  184657 provision.go:84] configureAuth start
	I1006 19:50:08.844008  184657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-100545
	I1006 19:50:08.861985  184657 provision.go:143] copyHostCerts
	I1006 19:50:08.862083  184657 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:50:08.862095  184657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:50:08.862181  184657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:50:08.862288  184657 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:50:08.862294  184657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:50:08.862320  184657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:50:08.862369  184657 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:50:08.862374  184657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:50:08.862402  184657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:50:08.862449  184657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-100545 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-100545]
	I1006 19:50:09.015668  184657 provision.go:177] copyRemoteCerts
	I1006 19:50:09.015774  184657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:50:09.015821  184657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:50:09.033997  184657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33050 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:50:09.131854  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:50:09.150319  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1006 19:50:09.168628  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 19:50:09.187811  184657 provision.go:87] duration metric: took 343.847794ms to configureAuth
	I1006 19:50:09.187851  184657 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:50:09.188085  184657 config.go:182] Loaded profile config "old-k8s-version-100545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1006 19:50:09.188234  184657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:50:09.205444  184657 main.go:141] libmachine: Using SSH client type: native
	I1006 19:50:09.205748  184657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33050 <nil> <nil>}
	I1006 19:50:09.205766  184657 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:50:09.458297  184657 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:50:09.458405  184657 machine.go:96] duration metric: took 4.087778733s to provisionDockerMachine
	I1006 19:50:09.458438  184657 client.go:171] duration metric: took 11.657794871s to LocalClient.Create
	I1006 19:50:09.458475  184657 start.go:167] duration metric: took 11.657888575s to libmachine.API.Create "old-k8s-version-100545"
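The provisioning step above wrote /etc/sysconfig/crio.minikube with --insecure-registry 10.96.0.0/12, which matches the ServiceCIDR in the cluster config earlier in this log, so CRI-O will accept pulls from in-cluster registry services. A sketch for confirming the drop-in on the node (using minikube ssh; not part of the logged run):

    # Inspect the CRI-O options written during provisioning.
    out/minikube-linux-arm64 -p old-k8s-version-100545 ssh -- cat /etc/sysconfig/crio.minikube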
	I1006 19:50:09.458497  184657 start.go:293] postStartSetup for "old-k8s-version-100545" (driver="docker")
	I1006 19:50:09.458521  184657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:50:09.458596  184657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:50:09.458663  184657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:50:09.478572  184657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33050 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:50:09.576282  184657 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:50:09.579846  184657 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:50:09.579875  184657 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:50:09.579886  184657 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:50:09.579947  184657 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:50:09.580030  184657 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:50:09.580126  184657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:50:09.587796  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:50:09.616074  184657 start.go:296] duration metric: took 157.549738ms for postStartSetup
	I1006 19:50:09.616507  184657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-100545
	I1006 19:50:09.633420  184657 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/config.json ...
	I1006 19:50:09.633706  184657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:50:09.633761  184657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:50:09.650440  184657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33050 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:50:09.744763  184657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:50:09.749993  184657 start.go:128] duration metric: took 11.953235377s to createHost
	I1006 19:50:09.750020  184657 start.go:83] releasing machines lock for "old-k8s-version-100545", held for 11.953374415s
	I1006 19:50:09.750091  184657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-100545
	I1006 19:50:09.770304  184657 ssh_runner.go:195] Run: cat /version.json
	I1006 19:50:09.770362  184657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:50:09.770646  184657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:50:09.770706  184657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:50:09.790875  184657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33050 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:50:09.804499  184657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33050 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:50:09.989415  184657 ssh_runner.go:195] Run: systemctl --version
	I1006 19:50:09.996022  184657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:50:10.042163  184657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:50:10.046885  184657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:50:10.047012  184657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:50:10.076984  184657 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 19:50:10.077070  184657 start.go:495] detecting cgroup driver to use...
	I1006 19:50:10.077119  184657 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:50:10.077186  184657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:50:10.097161  184657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:50:10.111917  184657 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:50:10.111989  184657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:50:10.130030  184657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:50:10.148919  184657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:50:10.274351  184657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:50:10.399409  184657 docker.go:234] disabling docker service ...
	I1006 19:50:10.399497  184657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:50:10.421715  184657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:50:10.436072  184657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:50:10.577554  184657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:50:10.697059  184657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:50:10.710302  184657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:50:10.723871  184657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1006 19:50:10.723954  184657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:50:10.732569  184657 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:50:10.732662  184657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:50:10.741508  184657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:50:10.750630  184657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:50:10.759848  184657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:50:10.768547  184657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:50:10.777634  184657 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:50:10.790926  184657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:50:10.799801  184657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:50:10.807773  184657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:50:10.815301  184657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:50:10.926703  184657 ssh_runner.go:195] Run: sudo systemctl restart crio
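The sed one-liners above edit /etc/crio/crio.conf.d/02-crio.conf in place; taken together they amount to the settings sketched below (an approximate reconstruction from the commands in the log, not a dump of the actual file), which the daemon-reload and restart then pick up:

    # Net effect of the edits above on /etc/crio/crio.conf.d/02-crio.conf:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    # Verify on the node, then restart CRI-O:
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio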
	I1006 19:50:11.062166  184657 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:50:11.062313  184657 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:50:11.066487  184657 start.go:563] Will wait 60s for crictl version
	I1006 19:50:11.066589  184657 ssh_runner.go:195] Run: which crictl
	I1006 19:50:11.070318  184657 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:50:11.096428  184657 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:50:11.096590  184657 ssh_runner.go:195] Run: crio --version
	I1006 19:50:11.124382  184657 ssh_runner.go:195] Run: crio --version
	I1006 19:50:11.158695  184657 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1006 19:50:11.160206  184657 cli_runner.go:164] Run: docker network inspect old-k8s-version-100545 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:50:11.178826  184657 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1006 19:50:11.183496  184657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:50:11.194464  184657 kubeadm.go:883] updating cluster {Name:old-k8s-version-100545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-100545 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:50:11.194580  184657 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1006 19:50:11.194641  184657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:50:11.228978  184657 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:50:11.229006  184657 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:50:11.229067  184657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:50:11.270519  184657 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:50:11.270544  184657 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:50:11.270552  184657 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1006 19:50:11.270652  184657 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-100545 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-100545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:50:11.270737  184657 ssh_runner.go:195] Run: crio config
	I1006 19:50:11.345252  184657 cni.go:84] Creating CNI manager for ""
	I1006 19:50:11.345277  184657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:50:11.345294  184657 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:50:11.345319  184657 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-100545 NodeName:old-k8s-version-100545 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:50:11.345445  184657 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-100545"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:50:11.345536  184657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1006 19:50:11.354547  184657 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:50:11.354618  184657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:50:11.362145  184657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1006 19:50:11.374793  184657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:50:11.387261  184657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
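The generated YAML above is written to /var/tmp/minikube/kubeadm.yaml.new (2160 bytes per the scp line above) and later consumed by the cached kubeadm binary. The sketch below shows the general shape of that invocation only; the exact flags minikube passes are not shown in this excerpt:

    # Illustrative: bootstrap the control plane from the generated config.
    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new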
	I1006 19:50:11.400948  184657 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:50:11.404532  184657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:50:11.414120  184657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:50:11.529510  184657 ssh_runner.go:195] Run: sudo systemctl start kubelet
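After the unit files are installed and kubelet is started, its health can be checked on the node with standard systemd tooling. These commands are a sketch and are not part of the logged run:

    # Quick kubelet health check on the node.
    sudo systemctl is-active kubelet
    sudo journalctl -u kubelet --no-pager -n 20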
	I1006 19:50:11.546015  184657 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545 for IP: 192.168.76.2
	I1006 19:50:11.546037  184657 certs.go:195] generating shared ca certs ...
	I1006 19:50:11.546054  184657 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:50:11.546266  184657 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:50:11.546343  184657 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:50:11.546357  184657 certs.go:257] generating profile certs ...
	I1006 19:50:11.546432  184657 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.key
	I1006 19:50:11.546452  184657 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt with IP's: []
	I1006 19:50:11.917424  184657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt ...
	I1006 19:50:11.917455  184657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: {Name:mk164518c3992f76ff7300af9b91735fdf36bf05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:50:11.917678  184657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.key ...
	I1006 19:50:11.917696  184657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.key: {Name:mkb75ba14f83e3719805e050ca53497aff02c9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:50:11.917786  184657 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.key.139d205a
	I1006 19:50:11.917808  184657 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.crt.139d205a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1006 19:50:12.077352  184657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.crt.139d205a ...
	I1006 19:50:12.077381  184657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.crt.139d205a: {Name:mk4ef4649753085e52c9d3e61bb772c3756e0aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:50:12.077555  184657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.key.139d205a ...
	I1006 19:50:12.077567  184657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.key.139d205a: {Name:mk237c3a6665a2b53e518c89aedb245142b992c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:50:12.077652  184657 certs.go:382] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.crt.139d205a -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.crt
	I1006 19:50:12.077729  184657 certs.go:386] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.key.139d205a -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.key
	I1006 19:50:12.077785  184657 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/proxy-client.key
	I1006 19:50:12.077803  184657 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/proxy-client.crt with IP's: []
	I1006 19:50:12.277790  184657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/proxy-client.crt ...
	I1006 19:50:12.277818  184657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/proxy-client.crt: {Name:mkadb7da3f66e3f38777af445ec5475b4e0f89ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:50:12.277996  184657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/proxy-client.key ...
	I1006 19:50:12.278010  184657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/proxy-client.key: {Name:mk98c74e04274aba104fc4546a3587568b80cb2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
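The apiserver certificate generated above carries the SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2 plus the minikube hostnames). They can be confirmed with openssl against the profile cert written above, the same way the cert-options test earlier in this report inspects /var/lib/minikube/certs/apiserver.crt:

    # Print the SANs of the freshly generated apiserver certificate.
    openssl x509 -text -noout \
      -in /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.crt \
      | grep -A1 'Subject Alternative Name'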
	I1006 19:50:12.278198  184657 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:50:12.278248  184657 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:50:12.278262  184657 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:50:12.278287  184657 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:50:12.278315  184657 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:50:12.278345  184657 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:50:12.278391  184657 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:50:12.279005  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:50:12.299151  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:50:12.318690  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:50:12.338113  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:50:12.358766  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1006 19:50:12.379046  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 19:50:12.396905  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:50:12.415375  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 19:50:12.433078  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:50:12.450846  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:50:12.469304  184657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:50:12.487474  184657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:50:12.500728  184657 ssh_runner.go:195] Run: openssl version
	I1006 19:50:12.507073  184657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:50:12.515346  184657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:50:12.519288  184657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:50:12.519371  184657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:50:12.561093  184657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:50:12.569720  184657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:50:12.577874  184657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:50:12.581739  184657 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:50:12.581804  184657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:50:12.624287  184657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:50:12.632463  184657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:50:12.640709  184657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:50:12.644695  184657 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:50:12.644783  184657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:50:12.690662  184657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
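	The three blocks above hash each CA with openssl and expose it under its subject-hash name in /etc/ssl/certs. A minimal sketch of the same convention, using the minikube CA path from this run:
	  # compute the subject hash OpenSSL uses to look up a trusted CA (b5213941 in the log above)
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  # expose the certificate under <hash>.0 so TLS clients scanning /etc/ssl/certs can find it
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"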
	I1006 19:50:12.699336  184657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:50:12.702739  184657 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 19:50:12.702790  184657 kubeadm.go:400] StartCluster: {Name:old-k8s-version-100545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-100545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:50:12.702864  184657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:50:12.702920  184657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:50:12.729775  184657 cri.go:89] found id: ""
	I1006 19:50:12.729949  184657 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:50:12.738260  184657 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 19:50:12.746804  184657 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:50:12.746902  184657 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:50:12.754736  184657 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:50:12.754763  184657 kubeadm.go:157] found existing configuration files:
	
	I1006 19:50:12.754844  184657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 19:50:12.763085  184657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:50:12.763179  184657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:50:12.771025  184657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 19:50:12.778777  184657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:50:12.778840  184657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:50:12.786189  184657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 19:50:12.794002  184657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:50:12.794080  184657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:50:12.801414  184657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 19:50:12.809552  184657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:50:12.809617  184657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
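	The four grep/rm pairs above are the stale-config cleanup: any kubeconfig that does not already point at this cluster's control-plane endpoint is removed before kubeadm runs. A sketch of that loop under the same paths:
	  # drop kubeconfigs that do not reference this cluster's API endpoint
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"
	  done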
	I1006 19:50:12.816910  184657 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:50:12.862465  184657 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1006 19:50:12.862685  184657 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:50:12.901789  184657 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:50:12.901873  184657 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:50:12.901916  184657 kubeadm.go:318] OS: Linux
	I1006 19:50:12.901971  184657 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:50:12.902025  184657 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:50:12.902078  184657 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:50:12.902132  184657 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:50:12.902186  184657 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:50:12.902252  184657 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:50:12.902303  184657 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:50:12.902357  184657 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:50:12.902412  184657 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:50:13.005734  184657 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:50:13.005858  184657 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:50:13.005972  184657 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:50:13.168459  184657 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 19:50:13.170024  184657 out.go:252]   - Generating certificates and keys ...
	I1006 19:50:13.170113  184657 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:50:13.170566  184657 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:50:13.376461  184657 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 19:50:13.504716  184657 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 19:50:13.699646  184657 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 19:50:13.956196  184657 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 19:50:14.210659  184657 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 19:50:14.211028  184657 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-100545] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1006 19:50:14.523479  184657 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 19:50:14.523877  184657 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-100545] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1006 19:50:15.315222  184657 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 19:50:15.599236  184657 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 19:50:16.337661  184657 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 19:50:16.337943  184657 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 19:50:16.710832  184657 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:50:17.339275  184657 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:50:18.226016  184657 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:50:18.415897  184657 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:50:18.416806  184657 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:50:18.419585  184657 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 19:50:18.421299  184657 out.go:252]   - Booting up control plane ...
	I1006 19:50:18.421431  184657 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:50:18.427386  184657 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:50:18.428836  184657 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:50:18.446290  184657 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:50:18.447589  184657 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:50:18.447644  184657 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:50:18.585213  184657 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1006 19:50:26.087690  184657 kubeadm.go:318] [apiclient] All control plane components are healthy after 7.503857 seconds
	I1006 19:50:26.088086  184657 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 19:50:26.103693  184657 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 19:50:26.631992  184657 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 19:50:26.632388  184657 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-100545 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 19:50:27.148012  184657 kubeadm.go:318] [bootstrap-token] Using token: 9rl2sp.cwh36jp89itu7gtt
	I1006 19:50:27.149633  184657 out.go:252]   - Configuring RBAC rules ...
	I1006 19:50:27.149766  184657 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 19:50:27.158125  184657 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 19:50:27.167424  184657 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 19:50:27.171323  184657 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 19:50:27.175319  184657 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 19:50:27.179231  184657 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 19:50:27.194488  184657 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 19:50:27.474660  184657 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1006 19:50:27.563177  184657 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1006 19:50:27.564826  184657 kubeadm.go:318] 
	I1006 19:50:27.564908  184657 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1006 19:50:27.564919  184657 kubeadm.go:318] 
	I1006 19:50:27.565001  184657 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1006 19:50:27.565010  184657 kubeadm.go:318] 
	I1006 19:50:27.565037  184657 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1006 19:50:27.565548  184657 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 19:50:27.565620  184657 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 19:50:27.565629  184657 kubeadm.go:318] 
	I1006 19:50:27.565687  184657 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1006 19:50:27.565695  184657 kubeadm.go:318] 
	I1006 19:50:27.565745  184657 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 19:50:27.565753  184657 kubeadm.go:318] 
	I1006 19:50:27.565808  184657 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1006 19:50:27.565893  184657 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 19:50:27.565968  184657 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 19:50:27.565976  184657 kubeadm.go:318] 
	I1006 19:50:27.566248  184657 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 19:50:27.566351  184657 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1006 19:50:27.566360  184657 kubeadm.go:318] 
	I1006 19:50:27.566647  184657 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 9rl2sp.cwh36jp89itu7gtt \
	I1006 19:50:27.566767  184657 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd \
	I1006 19:50:27.567028  184657 kubeadm.go:318] 	--control-plane 
	I1006 19:50:27.567055  184657 kubeadm.go:318] 
	I1006 19:50:27.567382  184657 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1006 19:50:27.567395  184657 kubeadm.go:318] 
	I1006 19:50:27.567684  184657 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 9rl2sp.cwh36jp89itu7gtt \
	I1006 19:50:27.567978  184657 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd 
	I1006 19:50:27.578797  184657 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:50:27.578963  184657 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
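	The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. A sketch of recomputing it on the node for verification (standard kubeadm procedure, using the certificate directory reported above):
	  # should match the sha256 value in the kubeadm join command
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'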
	I1006 19:50:27.578984  184657 cni.go:84] Creating CNI manager for ""
	I1006 19:50:27.578992  184657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:50:27.580503  184657 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1006 19:50:27.581846  184657 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 19:50:27.588875  184657 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1006 19:50:27.588900  184657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1006 19:50:27.607726  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 19:50:28.612542  184657 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.004769026s)
	I1006 19:50:28.612586  184657 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 19:50:28.612703  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:28.612781  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-100545 minikube.k8s.io/updated_at=2025_10_06T19_50_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81 minikube.k8s.io/name=old-k8s-version-100545 minikube.k8s.io/primary=true
	I1006 19:50:28.778170  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:28.778244  184657 ops.go:34] apiserver oom_adj: -16
	I1006 19:50:29.279033  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:29.778985  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:30.278735  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:30.778574  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:31.278979  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:31.779305  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:32.278229  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:32.778753  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:33.278708  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:33.778957  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:34.279082  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:34.779019  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:35.279228  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:35.778888  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:36.278368  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:36.778311  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:37.278510  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:37.778337  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:38.278270  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:38.778933  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:39.278905  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:39.778283  184657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:50:39.874743  184657 kubeadm.go:1113] duration metric: took 11.262084328s to wait for elevateKubeSystemPrivileges
	I1006 19:50:39.874774  184657 kubeadm.go:402] duration metric: took 27.171986032s to StartCluster
	I1006 19:50:39.874800  184657 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:50:39.874862  184657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:50:39.875862  184657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:50:39.876077  184657 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:50:39.876189  184657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 19:50:39.876422  184657 config.go:182] Loaded profile config "old-k8s-version-100545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1006 19:50:39.876462  184657 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:50:39.876520  184657 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-100545"
	I1006 19:50:39.876534  184657 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-100545"
	I1006 19:50:39.876555  184657 host.go:66] Checking if "old-k8s-version-100545" exists ...
	I1006 19:50:39.877030  184657 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:50:39.877747  184657 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-100545"
	I1006 19:50:39.877770  184657 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-100545"
	I1006 19:50:39.878052  184657 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:50:39.881751  184657 out.go:179] * Verifying Kubernetes components...
	I1006 19:50:39.884333  184657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:50:39.917618  184657 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-100545"
	I1006 19:50:39.917657  184657 host.go:66] Checking if "old-k8s-version-100545" exists ...
	I1006 19:50:39.918063  184657 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:50:39.924746  184657 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 19:50:39.926989  184657 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:50:39.927053  184657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 19:50:39.927157  184657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:50:39.940365  184657 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 19:50:39.940389  184657 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 19:50:39.940452  184657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:50:39.968468  184657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33050 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:50:39.976950  184657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33050 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:50:40.283036  184657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1006 19:50:40.283168  184657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:50:40.292653  184657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:50:40.296232  184657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 19:50:41.444010  184657 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.160814099s)
	I1006 19:50:41.444145  184657 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.161071801s)
	I1006 19:50:41.444165  184657 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
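	The sed pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway. A sketch of checking the injected block afterwards (assumes kubectl is pointed at this cluster):
	  # the Corefile should now contain an injected block like:
	  #   hosts {
	  #      192.168.76.1 host.minikube.internal
	  #      fallthrough
	  #   }
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'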
	I1006 19:50:41.445879  184657 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-100545" to be "Ready" ...
	I1006 19:50:41.757261  184657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.464572039s)
	I1006 19:50:41.757334  184657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.461079865s)
	I1006 19:50:41.772344  184657 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1006 19:50:41.775155  184657 addons.go:514] duration metric: took 1.898675276s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1006 19:50:41.950379  184657 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-100545" context rescaled to 1 replicas
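	The rescale noted above pins CoreDNS to a single replica, which is all a one-node cluster needs; the equivalent kubectl operation would be:
	  # trim the default CoreDNS deployment down to one replica
	  kubectl -n kube-system scale deployment coredns --replicas=1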
	W1006 19:50:43.450988  184657 node_ready.go:57] node "old-k8s-version-100545" has "Ready":"False" status (will retry)
	W1006 19:50:45.949227  184657 node_ready.go:57] node "old-k8s-version-100545" has "Ready":"False" status (will retry)
	W1006 19:50:47.949288  184657 node_ready.go:57] node "old-k8s-version-100545" has "Ready":"False" status (will retry)
	W1006 19:50:49.949702  184657 node_ready.go:57] node "old-k8s-version-100545" has "Ready":"False" status (will retry)
	W1006 19:50:52.449669  184657 node_ready.go:57] node "old-k8s-version-100545" has "Ready":"False" status (will retry)
	I1006 19:50:54.449344  184657 node_ready.go:49] node "old-k8s-version-100545" is "Ready"
	I1006 19:50:54.449375  184657 node_ready.go:38] duration metric: took 13.003467174s for node "old-k8s-version-100545" to be "Ready" ...
	I1006 19:50:54.449389  184657 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:50:54.449447  184657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:50:54.461063  184657 api_server.go:72] duration metric: took 14.584933879s to wait for apiserver process to appear ...
	I1006 19:50:54.461086  184657 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:50:54.461105  184657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:50:54.478432  184657 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1006 19:50:54.482438  184657 api_server.go:141] control plane version: v1.28.0
	I1006 19:50:54.482472  184657 api_server.go:131] duration metric: took 21.377963ms to wait for apiserver health ...
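	The healthz poll above can be reproduced by hand; a sketch using the endpoint from the log, with certificate verification skipped for brevity:
	  # /healthz returns 200 with body "ok" once the apiserver is up; /version reports v1.28.0
	  curl -k https://192.168.76.2:8443/healthz
	  curl -k https://192.168.76.2:8443/version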
	I1006 19:50:54.482482  184657 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:50:54.488167  184657 system_pods.go:59] 8 kube-system pods found
	I1006 19:50:54.488206  184657 system_pods.go:61] "coredns-5dd5756b68-pbzhb" [2a53f16d-7f31-4ccf-a6e8-e0b485d40c7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:50:54.488213  184657 system_pods.go:61] "etcd-old-k8s-version-100545" [4bbefd5f-0657-4d3b-bb1f-4fa90c467916] Running
	I1006 19:50:54.488219  184657 system_pods.go:61] "kindnet-l292c" [b6fbb8a4-b9fa-43dd-aaa9-9657f706d606] Running
	I1006 19:50:54.488224  184657 system_pods.go:61] "kube-apiserver-old-k8s-version-100545" [a7ef742d-d559-46f7-93da-a4dfb491f13d] Running
	I1006 19:50:54.488229  184657 system_pods.go:61] "kube-controller-manager-old-k8s-version-100545" [ec8c9c46-e338-4387-a86c-51524b77947f] Running
	I1006 19:50:54.488233  184657 system_pods.go:61] "kube-proxy-h4bcn" [1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e] Running
	I1006 19:50:54.488238  184657 system_pods.go:61] "kube-scheduler-old-k8s-version-100545" [78fe852c-4e6a-4964-b900-fdf5da9290b2] Running
	I1006 19:50:54.488244  184657 system_pods.go:61] "storage-provisioner" [ce45de57-885f-44c7-8bc3-19d8c43b20b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:50:54.488251  184657 system_pods.go:74] duration metric: took 5.763424ms to wait for pod list to return data ...
	I1006 19:50:54.488260  184657 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:50:54.492997  184657 default_sa.go:45] found service account: "default"
	I1006 19:50:54.493023  184657 default_sa.go:55] duration metric: took 4.757525ms for default service account to be created ...
	I1006 19:50:54.493034  184657 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 19:50:54.502699  184657 system_pods.go:86] 8 kube-system pods found
	I1006 19:50:54.502736  184657 system_pods.go:89] "coredns-5dd5756b68-pbzhb" [2a53f16d-7f31-4ccf-a6e8-e0b485d40c7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:50:54.502742  184657 system_pods.go:89] "etcd-old-k8s-version-100545" [4bbefd5f-0657-4d3b-bb1f-4fa90c467916] Running
	I1006 19:50:54.502776  184657 system_pods.go:89] "kindnet-l292c" [b6fbb8a4-b9fa-43dd-aaa9-9657f706d606] Running
	I1006 19:50:54.502790  184657 system_pods.go:89] "kube-apiserver-old-k8s-version-100545" [a7ef742d-d559-46f7-93da-a4dfb491f13d] Running
	I1006 19:50:54.502795  184657 system_pods.go:89] "kube-controller-manager-old-k8s-version-100545" [ec8c9c46-e338-4387-a86c-51524b77947f] Running
	I1006 19:50:54.502799  184657 system_pods.go:89] "kube-proxy-h4bcn" [1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e] Running
	I1006 19:50:54.502803  184657 system_pods.go:89] "kube-scheduler-old-k8s-version-100545" [78fe852c-4e6a-4964-b900-fdf5da9290b2] Running
	I1006 19:50:54.502808  184657 system_pods.go:89] "storage-provisioner" [ce45de57-885f-44c7-8bc3-19d8c43b20b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:50:54.502843  184657 retry.go:31] will retry after 219.569827ms: missing components: kube-dns
	I1006 19:50:54.727098  184657 system_pods.go:86] 8 kube-system pods found
	I1006 19:50:54.727132  184657 system_pods.go:89] "coredns-5dd5756b68-pbzhb" [2a53f16d-7f31-4ccf-a6e8-e0b485d40c7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:50:54.727139  184657 system_pods.go:89] "etcd-old-k8s-version-100545" [4bbefd5f-0657-4d3b-bb1f-4fa90c467916] Running
	I1006 19:50:54.727145  184657 system_pods.go:89] "kindnet-l292c" [b6fbb8a4-b9fa-43dd-aaa9-9657f706d606] Running
	I1006 19:50:54.727149  184657 system_pods.go:89] "kube-apiserver-old-k8s-version-100545" [a7ef742d-d559-46f7-93da-a4dfb491f13d] Running
	I1006 19:50:54.727154  184657 system_pods.go:89] "kube-controller-manager-old-k8s-version-100545" [ec8c9c46-e338-4387-a86c-51524b77947f] Running
	I1006 19:50:54.727159  184657 system_pods.go:89] "kube-proxy-h4bcn" [1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e] Running
	I1006 19:50:54.727164  184657 system_pods.go:89] "kube-scheduler-old-k8s-version-100545" [78fe852c-4e6a-4964-b900-fdf5da9290b2] Running
	I1006 19:50:54.727170  184657 system_pods.go:89] "storage-provisioner" [ce45de57-885f-44c7-8bc3-19d8c43b20b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:50:54.727214  184657 retry.go:31] will retry after 247.075393ms: missing components: kube-dns
	I1006 19:50:54.978785  184657 system_pods.go:86] 8 kube-system pods found
	I1006 19:50:54.978824  184657 system_pods.go:89] "coredns-5dd5756b68-pbzhb" [2a53f16d-7f31-4ccf-a6e8-e0b485d40c7a] Running
	I1006 19:50:54.978842  184657 system_pods.go:89] "etcd-old-k8s-version-100545" [4bbefd5f-0657-4d3b-bb1f-4fa90c467916] Running
	I1006 19:50:54.978847  184657 system_pods.go:89] "kindnet-l292c" [b6fbb8a4-b9fa-43dd-aaa9-9657f706d606] Running
	I1006 19:50:54.978855  184657 system_pods.go:89] "kube-apiserver-old-k8s-version-100545" [a7ef742d-d559-46f7-93da-a4dfb491f13d] Running
	I1006 19:50:54.978861  184657 system_pods.go:89] "kube-controller-manager-old-k8s-version-100545" [ec8c9c46-e338-4387-a86c-51524b77947f] Running
	I1006 19:50:54.978865  184657 system_pods.go:89] "kube-proxy-h4bcn" [1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e] Running
	I1006 19:50:54.978869  184657 system_pods.go:89] "kube-scheduler-old-k8s-version-100545" [78fe852c-4e6a-4964-b900-fdf5da9290b2] Running
	I1006 19:50:54.978873  184657 system_pods.go:89] "storage-provisioner" [ce45de57-885f-44c7-8bc3-19d8c43b20b8] Running
	I1006 19:50:54.978897  184657 system_pods.go:126] duration metric: took 485.856948ms to wait for k8s-apps to be running ...
	I1006 19:50:54.978914  184657 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 19:50:54.978983  184657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:50:54.994717  184657 system_svc.go:56] duration metric: took 15.79439ms WaitForService to wait for kubelet
	I1006 19:50:54.994788  184657 kubeadm.go:586] duration metric: took 15.11868004s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:50:54.994821  184657 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:50:54.999178  184657 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:50:54.999216  184657 node_conditions.go:123] node cpu capacity is 2
	I1006 19:50:54.999229  184657 node_conditions.go:105] duration metric: took 4.390996ms to run NodePressure ...
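	The NodePressure check reads the capacity figures from the node status; the same numbers are visible with kubectl (node name taken from this run):
	  # Capacity lists cpu: 2 and ephemeral-storage: 203034800Ki for this node
	  kubectl describe node old-k8s-version-100545 | grep -A 6 '^Capacity'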
	I1006 19:50:54.999242  184657 start.go:241] waiting for startup goroutines ...
	I1006 19:50:54.999250  184657 start.go:246] waiting for cluster config update ...
	I1006 19:50:54.999260  184657 start.go:255] writing updated cluster config ...
	I1006 19:50:54.999541  184657 ssh_runner.go:195] Run: rm -f paused
	I1006 19:50:55.003822  184657 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:50:55.008400  184657 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-pbzhb" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:50:55.028251  184657 pod_ready.go:94] pod "coredns-5dd5756b68-pbzhb" is "Ready"
	I1006 19:50:55.028294  184657 pod_ready.go:86] duration metric: took 19.863806ms for pod "coredns-5dd5756b68-pbzhb" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:50:55.032865  184657 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:50:55.042097  184657 pod_ready.go:94] pod "etcd-old-k8s-version-100545" is "Ready"
	I1006 19:50:55.042195  184657 pod_ready.go:86] duration metric: took 9.296684ms for pod "etcd-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:50:55.046724  184657 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:50:55.058259  184657 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-100545" is "Ready"
	I1006 19:50:55.058660  184657 pod_ready.go:86] duration metric: took 11.879906ms for pod "kube-apiserver-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:50:55.070420  184657 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:50:55.408544  184657 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-100545" is "Ready"
	I1006 19:50:55.408630  184657 pod_ready.go:86] duration metric: took 338.179156ms for pod "kube-controller-manager-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:50:55.609627  184657 pod_ready.go:83] waiting for pod "kube-proxy-h4bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:50:56.008136  184657 pod_ready.go:94] pod "kube-proxy-h4bcn" is "Ready"
	I1006 19:50:56.008167  184657 pod_ready.go:86] duration metric: took 398.508769ms for pod "kube-proxy-h4bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:50:56.209234  184657 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:50:56.608005  184657 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-100545" is "Ready"
	I1006 19:50:56.608033  184657 pod_ready.go:86] duration metric: took 398.772628ms for pod "kube-scheduler-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:50:56.608046  184657 pod_ready.go:40] duration metric: took 1.604189958s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
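	The per-pod waits above can be approximated with kubectl's built-in wait; a minimal sketch for one of the label selectors used here:
	  # block until the CoreDNS pod reports Ready (or the timeout expires)
	  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m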
	I1006 19:50:56.663025  184657 start.go:623] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1006 19:50:56.666100  184657 out.go:203] 
	W1006 19:50:56.668888  184657 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1006 19:50:56.671862  184657 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1006 19:50:56.675794  184657 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-100545" cluster and "default" namespace by default
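	With the profile written to the kubeconfig, the cluster can also be selected explicitly; the profile name doubles as the kubectl context name:
	  kubectl config use-context old-k8s-version-100545
	  kubectl config current-context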
	
	
	==> CRI-O <==
	Oct 06 19:50:54 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:54.582653504Z" level=info msg="Created container 29ae5d1edf4ca44550c3e406975a40f10a6e48b3f39f4d1382437815ee277f13: kube-system/coredns-5dd5756b68-pbzhb/coredns" id=c7e88d92-a21f-4e70-919e-700a182e8f30 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:50:54 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:54.583575783Z" level=info msg="Starting container: 29ae5d1edf4ca44550c3e406975a40f10a6e48b3f39f4d1382437815ee277f13" id=c5e679cc-36d8-43ba-91b7-13b3f10d049d name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:50:54 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:54.587282485Z" level=info msg="Started container" PID=1912 containerID=29ae5d1edf4ca44550c3e406975a40f10a6e48b3f39f4d1382437815ee277f13 description=kube-system/coredns-5dd5756b68-pbzhb/coredns id=c5e679cc-36d8-43ba-91b7-13b3f10d049d name=/runtime.v1.RuntimeService/StartContainer sandboxID=0309f9897c1062129b5893c823006f97d49b7c7d42a7fe2b8bd16bb0e31b4641
	Oct 06 19:50:57 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:57.221483595Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6617205a-af92-422a-9608-73ed91b04e94 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:50:57 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:57.221556884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:50:57 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:57.228346418Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c6796384d420e48f316ea4126015baf6855b291eef1c8b8ba1f0b775f15627ae UID:6d3b5c33-4fd7-4e24-8775-4f0f4c5f0a46 NetNS:/var/run/netns/daaf9a84-372f-4713-9fc6-40112c30d1a3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40024e6cb0}] Aliases:map[]}"
	Oct 06 19:50:57 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:57.228387018Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 06 19:50:57 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:57.239475969Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c6796384d420e48f316ea4126015baf6855b291eef1c8b8ba1f0b775f15627ae UID:6d3b5c33-4fd7-4e24-8775-4f0f4c5f0a46 NetNS:/var/run/netns/daaf9a84-372f-4713-9fc6-40112c30d1a3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40024e6cb0}] Aliases:map[]}"
	Oct 06 19:50:57 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:57.239622154Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 06 19:50:57 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:57.244643791Z" level=info msg="Ran pod sandbox c6796384d420e48f316ea4126015baf6855b291eef1c8b8ba1f0b775f15627ae with infra container: default/busybox/POD" id=6617205a-af92-422a-9608-73ed91b04e94 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:50:57 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:57.245736486Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=39e0e2f7-77de-4933-abfd-d6d37ce1e1bb name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:50:57 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:57.245874022Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=39e0e2f7-77de-4933-abfd-d6d37ce1e1bb name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:50:57 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:57.245911602Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=39e0e2f7-77de-4933-abfd-d6d37ce1e1bb name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:50:57 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:57.252536391Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6beb89c2-18b1-43d6-b60a-3fe02110a84c name=/runtime.v1.ImageService/PullImage
	Oct 06 19:50:57 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:57.258136162Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 06 19:50:59 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:59.200279638Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=6beb89c2-18b1-43d6-b60a-3fe02110a84c name=/runtime.v1.ImageService/PullImage
	Oct 06 19:50:59 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:59.203677002Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5ffd6516-4e42-45af-8e19-3e2901ee700b name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:50:59 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:59.20588082Z" level=info msg="Creating container: default/busybox/busybox" id=625cdfef-f6da-4d21-a657-4b6a3008a7be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:50:59 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:59.20662315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:50:59 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:59.211253757Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:50:59 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:59.211754943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:50:59 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:59.229306523Z" level=info msg="Created container 5692eda6cf88f710bcfcc2a5a525c256e21510b1dc47d8854839b1ca89270436: default/busybox/busybox" id=625cdfef-f6da-4d21-a657-4b6a3008a7be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:50:59 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:59.231933757Z" level=info msg="Starting container: 5692eda6cf88f710bcfcc2a5a525c256e21510b1dc47d8854839b1ca89270436" id=89b0b6da-0815-4ec5-b93e-7a878b31d74b name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:50:59 old-k8s-version-100545 crio[844]: time="2025-10-06T19:50:59.234829402Z" level=info msg="Started container" PID=1965 containerID=5692eda6cf88f710bcfcc2a5a525c256e21510b1dc47d8854839b1ca89270436 description=default/busybox/busybox id=89b0b6da-0815-4ec5-b93e-7a878b31d74b name=/runtime.v1.RuntimeService/StartContainer sandboxID=c6796384d420e48f316ea4126015baf6855b291eef1c8b8ba1f0b775f15627ae
	Oct 06 19:51:06 old-k8s-version-100545 crio[844]: time="2025-10-06T19:51:06.064395447Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	5692eda6cf88f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   c6796384d420e       busybox                                          default
	29ae5d1edf4ca       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   0309f9897c106       coredns-5dd5756b68-pbzhb                         kube-system
	ad6bd8bbeee9b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   7880779b423cd       storage-provisioner                              kube-system
	3bf04ffbe2cbb       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   991fba3f98a08       kindnet-l292c                                    kube-system
	94afbdb25d835       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      26 seconds ago      Running             kube-proxy                0                   b47b754646165       kube-proxy-h4bcn                                 kube-system
	a2de4782267c0       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      47 seconds ago      Running             kube-apiserver            0                   9779c2194ac2d       kube-apiserver-old-k8s-version-100545            kube-system
	e6c6d8bd866ba       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      47 seconds ago      Running             kube-controller-manager   0                   c9f3d7ff4bdee       kube-controller-manager-old-k8s-version-100545   kube-system
	d5f31246513fb       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      47 seconds ago      Running             etcd                      0                   bb7d0d2dd45bf       etcd-old-k8s-version-100545                      kube-system
	f7d11105bbc80       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      47 seconds ago      Running             kube-scheduler            0                   e3636fa4f53de       kube-scheduler-old-k8s-version-100545            kube-system
	
	
	==> coredns [29ae5d1edf4ca44550c3e406975a40f10a6e48b3f39f4d1382437815ee277f13] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50190 - 32820 "HINFO IN 9129844596481209976.233403710734368692. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.018653202s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-100545
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-100545
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=old-k8s-version-100545
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_50_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:50:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-100545
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:50:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:50:58 +0000   Mon, 06 Oct 2025 19:50:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:50:58 +0000   Mon, 06 Oct 2025 19:50:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:50:58 +0000   Mon, 06 Oct 2025 19:50:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:50:58 +0000   Mon, 06 Oct 2025 19:50:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-100545
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f2c85a0561043328b5bdefa6d4c9d0d
	  System UUID:                b1b34591-7b1c-445a-99e0-f9c92bb1885f
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-pbzhb                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-100545                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-l292c                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-100545             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-100545    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-h4bcn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-100545             100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 48s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node old-k8s-version-100545 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-100545 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-100545 event: Registered Node old-k8s-version-100545 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-100545 status is now: NodeReady
	
	
	==> dmesg <==
	[ +11.752506] hrtimer: interrupt took 8273017 ns
	[Oct 6 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:21] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:23] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d5f31246513fb405f815dc40b1f24442362efd0e93e9239d5b8aa00bda7f4d1f] <==
	{"level":"info","ts":"2025-10-06T19:50:20.384285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-06T19:50:20.384418Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-06T19:50:20.385008Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-06T19:50:20.38532Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-06T19:50:20.385145Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-06T19:50:20.385855Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-06T19:50:20.385786Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-06T19:50:21.074518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-06T19:50:21.07459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-06T19:50:21.074608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-06T19:50:21.074634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-06T19:50:21.074641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-06T19:50:21.074657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-06T19:50:21.074678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-06T19:50:21.079895Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-100545 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-06T19:50:21.079947Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-06T19:50:21.080917Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-06T19:50:21.081016Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-06T19:50:21.081549Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-06T19:50:21.081635Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-06T19:50:21.081661Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-06T19:50:21.081679Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-06T19:50:21.082586Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-06T19:50:21.082878Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-06T19:50:21.082908Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:51:07 up  1:33,  0 user,  load average: 1.96, 1.12, 1.49
	Linux old-k8s-version-100545 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3bf04ffbe2cbb1c904e4869eb81467cf23ee261785abfffebdaa5b0e2af32572] <==
	I1006 19:50:43.553042       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:50:43.643941       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1006 19:50:43.644079       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:50:43.644091       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:50:43.644107       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:50:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:50:43.848766       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:50:43.851792       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:50:43.851825       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:50:43.860197       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1006 19:50:44.052466       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:50:44.052564       1 metrics.go:72] Registering metrics
	I1006 19:50:44.052653       1 controller.go:711] "Syncing nftables rules"
	I1006 19:50:53.853366       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:50:53.853422       1 main.go:301] handling current node
	I1006 19:51:03.850957       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:51:03.850990       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a2de4782267c08a549dbf7f3bf281d4991a9c523d4cbce29cffc955a1cc0d30c] <==
	I1006 19:50:24.537486       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1006 19:50:24.537586       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1006 19:50:24.546953       1 controller.go:624] quota admission added evaluator for: namespaces
	I1006 19:50:24.563947       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1006 19:50:24.564427       1 aggregator.go:166] initial CRD sync complete...
	I1006 19:50:24.564472       1 autoregister_controller.go:141] Starting autoregister controller
	I1006 19:50:24.564479       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 19:50:24.564487       1 cache.go:39] Caches are synced for autoregister controller
	I1006 19:50:24.568420       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1006 19:50:24.615317       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:50:25.267614       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1006 19:50:25.273211       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1006 19:50:25.273239       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:50:25.943590       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:50:26.018603       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:50:26.105284       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1006 19:50:26.114612       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1006 19:50:26.115681       1 controller.go:624] quota admission added evaluator for: endpoints
	I1006 19:50:26.122816       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 19:50:26.438142       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1006 19:50:27.460146       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1006 19:50:27.472685       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1006 19:50:27.486471       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1006 19:50:39.496964       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1006 19:50:39.996993       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e6c6d8bd866bae373a3a38101a8ceef4b1ed64333a2622ba186d45e506ae4735] <==
	I1006 19:50:39.407628       1 shared_informer.go:318] Caches are synced for resource quota
	I1006 19:50:39.434926       1 shared_informer.go:318] Caches are synced for persistent volume
	I1006 19:50:39.443312       1 shared_informer.go:318] Caches are synced for resource quota
	I1006 19:50:39.512055       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-l292c"
	I1006 19:50:39.535267       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h4bcn"
	I1006 19:50:39.839790       1 shared_informer.go:318] Caches are synced for garbage collector
	I1006 19:50:39.839819       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1006 19:50:39.885565       1 shared_informer.go:318] Caches are synced for garbage collector
	I1006 19:50:40.017610       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1006 19:50:40.327882       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-9zjm5"
	I1006 19:50:40.372630       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pbzhb"
	I1006 19:50:40.395840       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="377.710295ms"
	I1006 19:50:40.422776       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.883715ms"
	I1006 19:50:40.422867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="52.719µs"
	I1006 19:50:41.486604       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1006 19:50:41.518576       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-9zjm5"
	I1006 19:50:41.541169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.381521ms"
	I1006 19:50:41.567029       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="25.719348ms"
	I1006 19:50:41.571948       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.707µs"
	I1006 19:50:54.172514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="361.36µs"
	I1006 19:50:54.192704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.191µs"
	I1006 19:50:54.292566       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1006 19:50:54.769620       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.878µs"
	I1006 19:50:54.803553       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.959857ms"
	I1006 19:50:54.804987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.786µs"
	
	
	==> kube-proxy [94afbdb25d83589555a9d8f0453ecf3e48a6fee6b82725422f1bf81f3e74f733] <==
	I1006 19:50:40.730810       1 server_others.go:69] "Using iptables proxy"
	I1006 19:50:40.792250       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1006 19:50:40.831564       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:50:40.834115       1 server_others.go:152] "Using iptables Proxier"
	I1006 19:50:40.834145       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1006 19:50:40.834153       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1006 19:50:40.834180       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1006 19:50:40.834511       1 server.go:846] "Version info" version="v1.28.0"
	I1006 19:50:40.834522       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:50:40.835781       1 config.go:188] "Starting service config controller"
	I1006 19:50:40.835792       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1006 19:50:40.835808       1 config.go:97] "Starting endpoint slice config controller"
	I1006 19:50:40.835812       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1006 19:50:40.836164       1 config.go:315] "Starting node config controller"
	I1006 19:50:40.836170       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1006 19:50:40.936554       1 shared_informer.go:318] Caches are synced for node config
	I1006 19:50:40.936582       1 shared_informer.go:318] Caches are synced for service config
	I1006 19:50:40.936624       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f7d11105bbc8019838199b82a820d581317e89614b10f4a11f36ef7bfc1a6a20] <==
	W1006 19:50:24.562774       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1006 19:50:24.562850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1006 19:50:24.562964       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1006 19:50:24.563033       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1006 19:50:24.569087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1006 19:50:24.569128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1006 19:50:24.569200       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1006 19:50:24.569218       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1006 19:50:24.569280       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1006 19:50:24.569295       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1006 19:50:24.569352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1006 19:50:24.569366       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1006 19:50:24.569382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1006 19:50:24.569390       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1006 19:50:24.569402       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1006 19:50:24.569409       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1006 19:50:25.412100       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1006 19:50:25.412142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1006 19:50:25.451201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1006 19:50:25.451246       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1006 19:50:25.526730       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1006 19:50:25.526781       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1006 19:50:25.655492       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1006 19:50:25.655526       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1006 19:50:27.237293       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 06 19:50:39 old-k8s-version-100545 kubelet[1353]: I1006 19:50:39.677285    1353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e-lib-modules\") pod \"kube-proxy-h4bcn\" (UID: \"1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e\") " pod="kube-system/kube-proxy-h4bcn"
	Oct 06 19:50:39 old-k8s-version-100545 kubelet[1353]: I1006 19:50:39.677354    1353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e-xtables-lock\") pod \"kube-proxy-h4bcn\" (UID: \"1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e\") " pod="kube-system/kube-proxy-h4bcn"
	Oct 06 19:50:39 old-k8s-version-100545 kubelet[1353]: I1006 19:50:39.677384    1353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbbwj\" (UniqueName: \"kubernetes.io/projected/1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e-kube-api-access-fbbwj\") pod \"kube-proxy-h4bcn\" (UID: \"1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e\") " pod="kube-system/kube-proxy-h4bcn"
	Oct 06 19:50:39 old-k8s-version-100545 kubelet[1353]: E1006 19:50:39.688409    1353 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 06 19:50:39 old-k8s-version-100545 kubelet[1353]: E1006 19:50:39.688459    1353 projected.go:198] Error preparing data for projected volume kube-api-access-68bqt for pod kube-system/kindnet-l292c: configmap "kube-root-ca.crt" not found
	Oct 06 19:50:39 old-k8s-version-100545 kubelet[1353]: E1006 19:50:39.688583    1353 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6fbb8a4-b9fa-43dd-aaa9-9657f706d606-kube-api-access-68bqt podName:b6fbb8a4-b9fa-43dd-aaa9-9657f706d606 nodeName:}" failed. No retries permitted until 2025-10-06 19:50:40.188554332 +0000 UTC m=+12.761559549 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-68bqt" (UniqueName: "kubernetes.io/projected/b6fbb8a4-b9fa-43dd-aaa9-9657f706d606-kube-api-access-68bqt") pod "kindnet-l292c" (UID: "b6fbb8a4-b9fa-43dd-aaa9-9657f706d606") : configmap "kube-root-ca.crt" not found
	Oct 06 19:50:39 old-k8s-version-100545 kubelet[1353]: E1006 19:50:39.793249    1353 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 06 19:50:39 old-k8s-version-100545 kubelet[1353]: E1006 19:50:39.793446    1353 projected.go:198] Error preparing data for projected volume kube-api-access-fbbwj for pod kube-system/kube-proxy-h4bcn: configmap "kube-root-ca.crt" not found
	Oct 06 19:50:39 old-k8s-version-100545 kubelet[1353]: E1006 19:50:39.793591    1353 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e-kube-api-access-fbbwj podName:1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e nodeName:}" failed. No retries permitted until 2025-10-06 19:50:40.293567839 +0000 UTC m=+12.866573056 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fbbwj" (UniqueName: "kubernetes.io/projected/1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e-kube-api-access-fbbwj") pod "kube-proxy-h4bcn" (UID: "1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e") : configmap "kube-root-ca.crt" not found
	Oct 06 19:50:40 old-k8s-version-100545 kubelet[1353]: W1006 19:50:40.445403    1353 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/crio-991fba3f98a0875aa264b75f45495ce4bb682c83eed0a5cacfb5270b40884427 WatchSource:0}: Error finding container 991fba3f98a0875aa264b75f45495ce4bb682c83eed0a5cacfb5270b40884427: Status 404 returned error can't find the container with id 991fba3f98a0875aa264b75f45495ce4bb682c83eed0a5cacfb5270b40884427
	Oct 06 19:50:43 old-k8s-version-100545 kubelet[1353]: I1006 19:50:43.744480    1353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-h4bcn" podStartSLOduration=4.744437729 podCreationTimestamp="2025-10-06 19:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:50:40.747356949 +0000 UTC m=+13.320362174" watchObservedRunningTime="2025-10-06 19:50:43.744437729 +0000 UTC m=+16.317442954"
	Oct 06 19:50:47 old-k8s-version-100545 kubelet[1353]: I1006 19:50:47.629757    1353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-l292c" podStartSLOduration=5.650116039 podCreationTimestamp="2025-10-06 19:50:39 +0000 UTC" firstStartedPulling="2025-10-06 19:50:40.451949775 +0000 UTC m=+13.024954992" lastFinishedPulling="2025-10-06 19:50:43.431532057 +0000 UTC m=+16.004537282" observedRunningTime="2025-10-06 19:50:43.745561407 +0000 UTC m=+16.318566624" watchObservedRunningTime="2025-10-06 19:50:47.629698329 +0000 UTC m=+20.202703545"
	Oct 06 19:50:54 old-k8s-version-100545 kubelet[1353]: I1006 19:50:54.126355    1353 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 06 19:50:54 old-k8s-version-100545 kubelet[1353]: I1006 19:50:54.162232    1353 topology_manager.go:215] "Topology Admit Handler" podUID="ce45de57-885f-44c7-8bc3-19d8c43b20b8" podNamespace="kube-system" podName="storage-provisioner"
	Oct 06 19:50:54 old-k8s-version-100545 kubelet[1353]: I1006 19:50:54.168557    1353 topology_manager.go:215] "Topology Admit Handler" podUID="2a53f16d-7f31-4ccf-a6e8-e0b485d40c7a" podNamespace="kube-system" podName="coredns-5dd5756b68-pbzhb"
	Oct 06 19:50:54 old-k8s-version-100545 kubelet[1353]: I1006 19:50:54.195828    1353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a53f16d-7f31-4ccf-a6e8-e0b485d40c7a-config-volume\") pod \"coredns-5dd5756b68-pbzhb\" (UID: \"2a53f16d-7f31-4ccf-a6e8-e0b485d40c7a\") " pod="kube-system/coredns-5dd5756b68-pbzhb"
	Oct 06 19:50:54 old-k8s-version-100545 kubelet[1353]: I1006 19:50:54.195887    1353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pvp4\" (UniqueName: \"kubernetes.io/projected/2a53f16d-7f31-4ccf-a6e8-e0b485d40c7a-kube-api-access-9pvp4\") pod \"coredns-5dd5756b68-pbzhb\" (UID: \"2a53f16d-7f31-4ccf-a6e8-e0b485d40c7a\") " pod="kube-system/coredns-5dd5756b68-pbzhb"
	Oct 06 19:50:54 old-k8s-version-100545 kubelet[1353]: I1006 19:50:54.195916    1353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4rs9\" (UniqueName: \"kubernetes.io/projected/ce45de57-885f-44c7-8bc3-19d8c43b20b8-kube-api-access-b4rs9\") pod \"storage-provisioner\" (UID: \"ce45de57-885f-44c7-8bc3-19d8c43b20b8\") " pod="kube-system/storage-provisioner"
	Oct 06 19:50:54 old-k8s-version-100545 kubelet[1353]: I1006 19:50:54.195942    1353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ce45de57-885f-44c7-8bc3-19d8c43b20b8-tmp\") pod \"storage-provisioner\" (UID: \"ce45de57-885f-44c7-8bc3-19d8c43b20b8\") " pod="kube-system/storage-provisioner"
	Oct 06 19:50:54 old-k8s-version-100545 kubelet[1353]: W1006 19:50:54.537669    1353 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/crio-0309f9897c1062129b5893c823006f97d49b7c7d42a7fe2b8bd16bb0e31b4641 WatchSource:0}: Error finding container 0309f9897c1062129b5893c823006f97d49b7c7d42a7fe2b8bd16bb0e31b4641: Status 404 returned error can't find the container with id 0309f9897c1062129b5893c823006f97d49b7c7d42a7fe2b8bd16bb0e31b4641
	Oct 06 19:50:54 old-k8s-version-100545 kubelet[1353]: I1006 19:50:54.787171    1353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-pbzhb" podStartSLOduration=14.787131384 podCreationTimestamp="2025-10-06 19:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:50:54.766967241 +0000 UTC m=+27.339972466" watchObservedRunningTime="2025-10-06 19:50:54.787131384 +0000 UTC m=+27.360136601"
	Oct 06 19:50:56 old-k8s-version-100545 kubelet[1353]: I1006 19:50:56.919751    1353 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.919639917 podCreationTimestamp="2025-10-06 19:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:50:54.809136213 +0000 UTC m=+27.382141447" watchObservedRunningTime="2025-10-06 19:50:56.919639917 +0000 UTC m=+29.492645134"
	Oct 06 19:50:56 old-k8s-version-100545 kubelet[1353]: I1006 19:50:56.919953    1353 topology_manager.go:215] "Topology Admit Handler" podUID="6d3b5c33-4fd7-4e24-8775-4f0f4c5f0a46" podNamespace="default" podName="busybox"
	Oct 06 19:50:57 old-k8s-version-100545 kubelet[1353]: I1006 19:50:57.025591    1353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbr44\" (UniqueName: \"kubernetes.io/projected/6d3b5c33-4fd7-4e24-8775-4f0f4c5f0a46-kube-api-access-tbr44\") pod \"busybox\" (UID: \"6d3b5c33-4fd7-4e24-8775-4f0f4c5f0a46\") " pod="default/busybox"
	Oct 06 19:50:57 old-k8s-version-100545 kubelet[1353]: W1006 19:50:57.244326    1353 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/crio-c6796384d420e48f316ea4126015baf6855b291eef1c8b8ba1f0b775f15627ae WatchSource:0}: Error finding container c6796384d420e48f316ea4126015baf6855b291eef1c8b8ba1f0b775f15627ae: Status 404 returned error can't find the container with id c6796384d420e48f316ea4126015baf6855b291eef1c8b8ba1f0b775f15627ae
	
	
	==> storage-provisioner [ad6bd8bbeee9b2334208d333358439d9e949d0a7aa54920fa6c972571e171cd7] <==
	I1006 19:50:54.562543       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 19:50:54.582229       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 19:50:54.582285       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1006 19:50:54.612294       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 19:50:54.612560       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-100545_8449d4ba-b53b-45da-9af6-313b9b742aa2!
	I1006 19:50:54.615164       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"411b65f9-f70e-49ad-ba6c-89d8933e38af", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-100545_8449d4ba-b53b-45da-9af6-313b9b742aa2 became leader
	I1006 19:50:54.715174       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-100545_8449d4ba-b53b-45da-9af6-313b9b742aa2!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-100545 -n old-k8s-version-100545
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-100545 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-100545 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-100545 --alsologtostderr -v=1: exit status 80 (2.457968946s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-100545 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:52:19.082925  190933 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:52:19.083144  190933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:52:19.083176  190933 out.go:374] Setting ErrFile to fd 2...
	I1006 19:52:19.083196  190933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:52:19.083459  190933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:52:19.083756  190933 out.go:368] Setting JSON to false
	I1006 19:52:19.083806  190933 mustload.go:65] Loading cluster: old-k8s-version-100545
	I1006 19:52:19.084220  190933 config.go:182] Loaded profile config "old-k8s-version-100545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1006 19:52:19.084727  190933 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:52:19.101551  190933 host.go:66] Checking if "old-k8s-version-100545" exists ...
	I1006 19:52:19.101869  190933 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:52:19.162116  190933 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-06 19:52:19.152978962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:52:19.162784  190933 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-100545 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1006 19:52:19.166239  190933 out.go:179] * Pausing node old-k8s-version-100545 ... 
	I1006 19:52:19.170092  190933 host.go:66] Checking if "old-k8s-version-100545" exists ...
	I1006 19:52:19.170578  190933 ssh_runner.go:195] Run: systemctl --version
	I1006 19:52:19.170637  190933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:52:19.187767  190933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:52:19.282399  190933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:52:19.295103  190933 pause.go:51] kubelet running: true
	I1006 19:52:19.295177  190933 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:52:19.523174  190933 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:52:19.523258  190933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:52:19.600955  190933 cri.go:89] found id: "fe27300a0ccb2d60aa2a92ac42db7ebc34ff4c80d2bd745b7157749d807ccd59"
	I1006 19:52:19.600979  190933 cri.go:89] found id: "a580cd1fc5132fd0541524579b8c5f441ed69951103a6c3be50cfa8f18514d5b"
	I1006 19:52:19.600984  190933 cri.go:89] found id: "d6f485a8ab13784b5bea939d07c6e334ff850a29eac20039c5ca941621b9c376"
	I1006 19:52:19.600988  190933 cri.go:89] found id: "bc0287aa0a83ee0795d413e51de320c7bf5087a5050b203fd68bfaaa5765f74e"
	I1006 19:52:19.600991  190933 cri.go:89] found id: "0b711f9a9459848bc688692b203cbbb20fe8f51db6e39706fde7b2385746a690"
	I1006 19:52:19.600994  190933 cri.go:89] found id: "8c6661172ed70b90aa4e68f8c6df968c5115fa2d90d59204af5eb1097fa7726a"
	I1006 19:52:19.600998  190933 cri.go:89] found id: "ee419682aebe0e3fd692c174ff777858ffe04bf8c1a05f5717da689629decf7a"
	I1006 19:52:19.601001  190933 cri.go:89] found id: "a18305a0a761844f2ffb53c14f0baa5064d0e47665887eddacbafe1fa1fc5fc1"
	I1006 19:52:19.601004  190933 cri.go:89] found id: "e5a93ede6956ec8f39200e0ca417de2ddcc0803039e26b14fc6c083f2a84d73c"
	I1006 19:52:19.601010  190933 cri.go:89] found id: "f8406dc707a80365322f2b9f5d84aec9974cb6a529db4ce3e4077d25f365ebc2"
	I1006 19:52:19.601013  190933 cri.go:89] found id: "ee2e449a1dbc6e891f301528edaf5f304cc37cb287756a88bd15304dfbe66e59"
	I1006 19:52:19.601016  190933 cri.go:89] found id: ""
	I1006 19:52:19.601067  190933 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:52:19.613609  190933 retry.go:31] will retry after 240.258231ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:52:19Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:52:19.854966  190933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:52:19.868401  190933 pause.go:51] kubelet running: false
	I1006 19:52:19.868475  190933 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:52:20.039908  190933 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:52:20.040040  190933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:52:20.114272  190933 cri.go:89] found id: "fe27300a0ccb2d60aa2a92ac42db7ebc34ff4c80d2bd745b7157749d807ccd59"
	I1006 19:52:20.114304  190933 cri.go:89] found id: "a580cd1fc5132fd0541524579b8c5f441ed69951103a6c3be50cfa8f18514d5b"
	I1006 19:52:20.114309  190933 cri.go:89] found id: "d6f485a8ab13784b5bea939d07c6e334ff850a29eac20039c5ca941621b9c376"
	I1006 19:52:20.114329  190933 cri.go:89] found id: "bc0287aa0a83ee0795d413e51de320c7bf5087a5050b203fd68bfaaa5765f74e"
	I1006 19:52:20.114333  190933 cri.go:89] found id: "0b711f9a9459848bc688692b203cbbb20fe8f51db6e39706fde7b2385746a690"
	I1006 19:52:20.114370  190933 cri.go:89] found id: "8c6661172ed70b90aa4e68f8c6df968c5115fa2d90d59204af5eb1097fa7726a"
	I1006 19:52:20.114387  190933 cri.go:89] found id: "ee419682aebe0e3fd692c174ff777858ffe04bf8c1a05f5717da689629decf7a"
	I1006 19:52:20.114407  190933 cri.go:89] found id: "a18305a0a761844f2ffb53c14f0baa5064d0e47665887eddacbafe1fa1fc5fc1"
	I1006 19:52:20.114415  190933 cri.go:89] found id: "e5a93ede6956ec8f39200e0ca417de2ddcc0803039e26b14fc6c083f2a84d73c"
	I1006 19:52:20.114422  190933 cri.go:89] found id: "f8406dc707a80365322f2b9f5d84aec9974cb6a529db4ce3e4077d25f365ebc2"
	I1006 19:52:20.114425  190933 cri.go:89] found id: "ee2e449a1dbc6e891f301528edaf5f304cc37cb287756a88bd15304dfbe66e59"
	I1006 19:52:20.114428  190933 cri.go:89] found id: ""
	I1006 19:52:20.114515  190933 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:52:20.126186  190933 retry.go:31] will retry after 511.220455ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:52:20Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:52:20.637652  190933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:52:20.651336  190933 pause.go:51] kubelet running: false
	I1006 19:52:20.651399  190933 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:52:20.817276  190933 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:52:20.817376  190933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:52:20.886205  190933 cri.go:89] found id: "fe27300a0ccb2d60aa2a92ac42db7ebc34ff4c80d2bd745b7157749d807ccd59"
	I1006 19:52:20.886228  190933 cri.go:89] found id: "a580cd1fc5132fd0541524579b8c5f441ed69951103a6c3be50cfa8f18514d5b"
	I1006 19:52:20.886232  190933 cri.go:89] found id: "d6f485a8ab13784b5bea939d07c6e334ff850a29eac20039c5ca941621b9c376"
	I1006 19:52:20.886236  190933 cri.go:89] found id: "bc0287aa0a83ee0795d413e51de320c7bf5087a5050b203fd68bfaaa5765f74e"
	I1006 19:52:20.886239  190933 cri.go:89] found id: "0b711f9a9459848bc688692b203cbbb20fe8f51db6e39706fde7b2385746a690"
	I1006 19:52:20.886243  190933 cri.go:89] found id: "8c6661172ed70b90aa4e68f8c6df968c5115fa2d90d59204af5eb1097fa7726a"
	I1006 19:52:20.886246  190933 cri.go:89] found id: "ee419682aebe0e3fd692c174ff777858ffe04bf8c1a05f5717da689629decf7a"
	I1006 19:52:20.886250  190933 cri.go:89] found id: "a18305a0a761844f2ffb53c14f0baa5064d0e47665887eddacbafe1fa1fc5fc1"
	I1006 19:52:20.886253  190933 cri.go:89] found id: "e5a93ede6956ec8f39200e0ca417de2ddcc0803039e26b14fc6c083f2a84d73c"
	I1006 19:52:20.886284  190933 cri.go:89] found id: "f8406dc707a80365322f2b9f5d84aec9974cb6a529db4ce3e4077d25f365ebc2"
	I1006 19:52:20.886293  190933 cri.go:89] found id: "ee2e449a1dbc6e891f301528edaf5f304cc37cb287756a88bd15304dfbe66e59"
	I1006 19:52:20.886296  190933 cri.go:89] found id: ""
	I1006 19:52:20.886358  190933 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:52:20.897720  190933 retry.go:31] will retry after 318.445367ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:52:20Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:52:21.217296  190933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:52:21.230269  190933 pause.go:51] kubelet running: false
	I1006 19:52:21.230331  190933 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:52:21.395287  190933 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:52:21.395415  190933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:52:21.465365  190933 cri.go:89] found id: "fe27300a0ccb2d60aa2a92ac42db7ebc34ff4c80d2bd745b7157749d807ccd59"
	I1006 19:52:21.465386  190933 cri.go:89] found id: "a580cd1fc5132fd0541524579b8c5f441ed69951103a6c3be50cfa8f18514d5b"
	I1006 19:52:21.465391  190933 cri.go:89] found id: "d6f485a8ab13784b5bea939d07c6e334ff850a29eac20039c5ca941621b9c376"
	I1006 19:52:21.465395  190933 cri.go:89] found id: "bc0287aa0a83ee0795d413e51de320c7bf5087a5050b203fd68bfaaa5765f74e"
	I1006 19:52:21.465398  190933 cri.go:89] found id: "0b711f9a9459848bc688692b203cbbb20fe8f51db6e39706fde7b2385746a690"
	I1006 19:52:21.465402  190933 cri.go:89] found id: "8c6661172ed70b90aa4e68f8c6df968c5115fa2d90d59204af5eb1097fa7726a"
	I1006 19:52:21.465406  190933 cri.go:89] found id: "ee419682aebe0e3fd692c174ff777858ffe04bf8c1a05f5717da689629decf7a"
	I1006 19:52:21.465409  190933 cri.go:89] found id: "a18305a0a761844f2ffb53c14f0baa5064d0e47665887eddacbafe1fa1fc5fc1"
	I1006 19:52:21.465413  190933 cri.go:89] found id: "e5a93ede6956ec8f39200e0ca417de2ddcc0803039e26b14fc6c083f2a84d73c"
	I1006 19:52:21.465420  190933 cri.go:89] found id: "f8406dc707a80365322f2b9f5d84aec9974cb6a529db4ce3e4077d25f365ebc2"
	I1006 19:52:21.465429  190933 cri.go:89] found id: "ee2e449a1dbc6e891f301528edaf5f304cc37cb287756a88bd15304dfbe66e59"
	I1006 19:52:21.465432  190933 cri.go:89] found id: ""
	I1006 19:52:21.465482  190933 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:52:21.479997  190933 out.go:203] 
	W1006 19:52:21.482990  190933 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:52:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 19:52:21.483016  190933 out.go:285] * 
	W1006 19:52:21.487832  190933 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 19:52:21.490845  190933 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-100545 --alsologtostderr -v=1 failed: exit status 80
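The pause attempt above fails because `sudo runc list -f json` cannot open /run/runc on the node, even though crictl still reports containers in the kube-system, kubernetes-dashboard and istio-operator namespaces; minikube retries with backoff (the retry.go lines) and finally exits with GUEST_PAUSE, which the CLI surfaces as exit status 80. Below is a minimal Go sketch of that sequence, assuming local execution on the node; the namespaces and the bare commands are taken from the log, while the helper names and retry budget are illustrative and are not minikube's own implementation.

// Sketch of the failing pause path seen above (not minikube's actual code).
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// criContainerIDs mirrors the `crictl ps -a --quiet --label ...` call in the log.
func criContainerIDs(namespace string) ([]byte, error) {
	return exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace="+namespace).Output()
}

// runcList mirrors `sudo runc list -f json`; on this node it fails with
// "open /run/runc: no such file or directory", so every retry fails the same way.
func runcList() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	for _, ns := range []string{"kube-system", "kubernetes-dashboard", "istio-operator"} {
		if ids, err := criContainerIDs(ns); err == nil {
			fmt.Printf("namespace %s: %s\n", ns, ids)
		}
	}

	backoff := 250 * time.Millisecond // assumed; the real delays in retry.go are jittered
	for attempt := 1; attempt <= 4; attempt++ {
		out, err := runcList()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		fmt.Printf("attempt %d failed: %v, retrying in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("giving up: GUEST_PAUSE") // the CLI exits 80 at this point
}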
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-100545
helpers_test.go:243: (dbg) docker inspect old-k8s-version-100545:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b",
	        "Created": "2025-10-06T19:50:04.309020012Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 188293,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:51:20.978085574Z",
	            "FinishedAt": "2025-10-06T19:51:20.121749444Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/hostname",
	        "HostsPath": "/var/lib/docker/containers/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/hosts",
	        "LogPath": "/var/lib/docker/containers/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b-json.log",
	        "Name": "/old-k8s-version-100545",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-100545:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-100545",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b",
	                "LowerDir": "/var/lib/docker/overlay2/7739f5fcc80275a40519b7d8366e72b83c80e8b92ef511dd23fbdc4e3cd32dd2-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7739f5fcc80275a40519b7d8366e72b83c80e8b92ef511dd23fbdc4e3cd32dd2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7739f5fcc80275a40519b7d8366e72b83c80e8b92ef511dd23fbdc4e3cd32dd2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7739f5fcc80275a40519b7d8366e72b83c80e8b92ef511dd23fbdc4e3cd32dd2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-100545",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-100545/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-100545",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-100545",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-100545",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19a11cc0ec1dbf538919e954e81c26f2daca1fc39a4d9eaa88e6b1484b102b48",
	            "SandboxKey": "/var/run/docker/netns/19a11cc0ec1d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-100545": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:07:94:ac:27:56",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70390eeacb58521b859ee9aa701da0b462d8cfbec3301aa774d326d82c9a1e6e",
	                    "EndpointID": "104306678ed14a41ffe90e6b78fc488e70c07196798f74f6bc97253bdee25926",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-100545",
	                        "44567b8f0b33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
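The inspect output above is where the earlier sshutil lines and the post-mortem get the published host ports: 22/tcp is mapped to 127.0.0.1:33055. A small Go sketch of reading that mapping back, using the same `docker container inspect -f` Go template that appears later in this log; only the profile name comes from this run, the helper itself is illustrative.

// Sketch: read the host-mapped SSH port for a minikube container from docker inspect.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("old-k8s-version-100545")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh is published on 127.0.0.1:" + port) // 33055 in this run
}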
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-100545 -n old-k8s-version-100545
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-100545 -n old-k8s-version-100545: exit status 2 (353.261923ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
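The harness then probes host state with `minikube status --format={{.Host}}` and tolerates the non-zero exit ("may be ok"): the host prints Running while the command still exits 2. A hedged Go sketch of that probe, capturing both the printed state and the exit code; the binary path, profile and node name come from the log, the error handling is illustrative.

// Sketch: run the status probe and keep both the printed host state and the exit code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status", "--format={{.Host}}",
		"-p", "old-k8s-version-100545", "-n", "old-k8s-version-100545")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	} else if err != nil {
		fmt.Println("status did not run:", err)
		return
	}
	fmt.Printf("host=%q exit=%d\n", host, code) // e.g. host="Running" exit=2 in this run
}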
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-100545 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-100545 logs -n 25: (1.325297301s)
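For reference, the advice box above asks reporters to attach the output of `minikube logs --file=logs.txt`, while the post-mortem here only tails the last 25 lines with `logs -n 25`. A short sketch of collecting both, assuming the same binary and profile as this run; the output file name is the one suggested by the advice box.

// Sketch: tail recent logs for the post-mortem and write the full log file for a bug report.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "old-k8s-version-100545"
	bin := "out/minikube-linux-arm64"

	// Tail used by the post-mortem above: last 25 lines to stdout.
	tail, err := exec.Command(bin, "-p", profile, "logs", "-n", "25").CombinedOutput()
	fmt.Printf("logs -n 25 (err=%v):\n%s\n", err, tail)

	// Full log file to attach to a GitHub issue, as the advice box suggests.
	if err := exec.Command(bin, "-p", profile, "logs", "--file=logs.txt").Run(); err != nil {
		fmt.Println("minikube logs --file failed:", err)
	}
}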
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-053944 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo containerd config dump                                                                                                                                                                                                  │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo crio config                                                                                                                                                                                                             │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ delete  │ -p cilium-053944                                                                                                                                                                                                                              │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │ 06 Oct 25 19:40 UTC │
	│ start   │ -p force-systemd-env-760371 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-760371  │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ force-systemd-flag-203169 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-203169 │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:47 UTC │
	│ delete  │ -p force-systemd-flag-203169                                                                                                                                                                                                                  │ force-systemd-flag-203169 │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:47 UTC │
	│ start   │ -p cert-expiration-585086 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-585086    │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:48 UTC │
	│ delete  │ -p force-systemd-env-760371                                                                                                                                                                                                                   │ force-systemd-env-760371  │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ start   │ -p cert-options-593131 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ ssh     │ cert-options-593131 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ ssh     │ -p cert-options-593131 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ delete  │ -p cert-options-593131                                                                                                                                                                                                                        │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-100545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │                     │
	│ stop    │ -p old-k8s-version-100545 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-100545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:51 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p cert-expiration-585086 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-585086    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │                     │
	│ image   │ old-k8s-version-100545 image list --format=json                                                                                                                                                                                               │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ pause   │ -p old-k8s-version-100545 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:51:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:51:25.966297  188975 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:51:25.966407  188975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:51:25.966410  188975 out.go:374] Setting ErrFile to fd 2...
	I1006 19:51:25.966422  188975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:51:25.966702  188975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:51:25.967057  188975 out.go:368] Setting JSON to false
	I1006 19:51:25.968062  188975 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5621,"bootTime":1759774665,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:51:25.968123  188975 start.go:140] virtualization:  
	I1006 19:51:25.971494  188975 out.go:179] * [cert-expiration-585086] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:51:25.975487  188975 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:51:25.975605  188975 notify.go:220] Checking for updates...
	I1006 19:51:25.981299  188975 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:51:25.984387  188975 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:51:25.987307  188975 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:51:25.991645  188975 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:51:25.994873  188975 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:51:25.998294  188975 config.go:182] Loaded profile config "cert-expiration-585086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:51:25.998846  188975 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:51:26.035673  188975 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:51:26.035796  188975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:51:26.116278  188975 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-06 19:51:26.100853485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:51:26.116403  188975 docker.go:318] overlay module found
	I1006 19:51:26.119613  188975 out.go:179] * Using the docker driver based on existing profile
	I1006 19:51:26.122575  188975 start.go:304] selected driver: docker
	I1006 19:51:26.122586  188975 start.go:924] validating driver "docker" against &{Name:cert-expiration-585086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-585086 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:51:26.122681  188975 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:51:26.123492  188975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:51:26.224405  188975 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-06 19:51:26.19342503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:51:26.224692  188975 cni.go:84] Creating CNI manager for ""
	I1006 19:51:26.224748  188975 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:51:26.224786  188975 start.go:348] cluster config:
	{Name:cert-expiration-585086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-585086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1006 19:51:26.228093  188975 out.go:179] * Starting "cert-expiration-585086" primary control-plane node in "cert-expiration-585086" cluster
	I1006 19:51:26.231102  188975 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:51:26.233909  188975 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:51:26.236856  188975 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:51:26.236904  188975 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:51:26.236912  188975 cache.go:58] Caching tarball of preloaded images
	I1006 19:51:26.236991  188975 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:51:26.236998  188975 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:51:26.237112  188975 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/config.json ...
	I1006 19:51:26.237344  188975 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:51:26.266378  188975 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:51:26.266390  188975 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:51:26.266424  188975 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:51:26.266453  188975 start.go:360] acquireMachinesLock for cert-expiration-585086: {Name:mkfbc592fc0fdee897fdcca1ec0865b663d6035c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:51:26.266540  188975 start.go:364] duration metric: took 51.02µs to acquireMachinesLock for "cert-expiration-585086"
	I1006 19:51:26.266559  188975 start.go:96] Skipping create...Using existing machine configuration
	I1006 19:51:26.266567  188975 fix.go:54] fixHost starting: 
	I1006 19:51:26.266862  188975 cli_runner.go:164] Run: docker container inspect cert-expiration-585086 --format={{.State.Status}}
	I1006 19:51:26.292168  188975 fix.go:112] recreateIfNeeded on cert-expiration-585086: state=Running err=<nil>
	W1006 19:51:26.292187  188975 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 19:51:25.699691  188165 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:51:25.703226  188165 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:51:25.703301  188165 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:51:25.703318  188165 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:51:25.703375  188165 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:51:25.703471  188165 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:51:25.703585  188165 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:51:25.711264  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:51:25.728800  188165 start.go:296] duration metric: took 151.303877ms for postStartSetup
	I1006 19:51:25.728882  188165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:51:25.728924  188165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:51:25.746639  188165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:51:25.840709  188165 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:51:25.845369  188165 fix.go:56] duration metric: took 4.92360908s for fixHost
	I1006 19:51:25.845391  188165 start.go:83] releasing machines lock for "old-k8s-version-100545", held for 4.923660051s
	I1006 19:51:25.845461  188165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-100545
	I1006 19:51:25.862329  188165 ssh_runner.go:195] Run: cat /version.json
	I1006 19:51:25.862368  188165 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:51:25.862392  188165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:51:25.862433  188165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:51:25.880031  188165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:51:25.885458  188165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:51:26.091917  188165 ssh_runner.go:195] Run: systemctl --version
	I1006 19:51:26.099191  188165 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:51:26.175086  188165 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:51:26.182457  188165 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:51:26.182558  188165 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:51:26.197017  188165 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 19:51:26.197038  188165 start.go:495] detecting cgroup driver to use...
	I1006 19:51:26.197096  188165 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:51:26.197153  188165 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:51:26.216059  188165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:51:26.233076  188165 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:51:26.233138  188165 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:51:26.259495  188165 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:51:26.287079  188165 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:51:26.441792  188165 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:51:26.601612  188165 docker.go:234] disabling docker service ...
	I1006 19:51:26.601692  188165 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:51:26.618462  188165 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:51:26.641188  188165 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:51:26.821293  188165 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:51:26.975558  188165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:51:26.995945  188165 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:51:27.013046  188165 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1006 19:51:27.013128  188165 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:27.024581  188165 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:51:27.024647  188165 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:27.034400  188165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:27.044317  188165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:27.054398  188165 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:51:27.063447  188165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:27.073356  188165 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:27.085150  188165 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:27.094564  188165 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:51:27.102877  188165 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:51:27.111803  188165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:51:27.267175  188165 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 19:51:27.481454  188165 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:51:27.481580  188165 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:51:27.485833  188165 start.go:563] Will wait 60s for crictl version
	I1006 19:51:27.485902  188165 ssh_runner.go:195] Run: which crictl
	I1006 19:51:27.489841  188165 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:51:27.524574  188165 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:51:27.524653  188165 ssh_runner.go:195] Run: crio --version
	I1006 19:51:27.555683  188165 ssh_runner.go:195] Run: crio --version
	I1006 19:51:27.592189  188165 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1006 19:51:27.594928  188165 cli_runner.go:164] Run: docker network inspect old-k8s-version-100545 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:51:27.618374  188165 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1006 19:51:27.622648  188165 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:51:27.632124  188165 kubeadm.go:883] updating cluster {Name:old-k8s-version-100545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-100545 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:51:27.632224  188165 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1006 19:51:27.632274  188165 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:51:27.669743  188165 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:51:27.669763  188165 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:51:27.669822  188165 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:51:27.709931  188165 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:51:27.709952  188165 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:51:27.709959  188165 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1006 19:51:27.710056  188165 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-100545 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-100545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
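
The [Unit]/[Service] fragment above becomes a systemd drop-in (the log scp's 372 bytes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). The empty `ExecStart=` line is the standard way to clear the ExecStart inherited from the base kubelet.service before redefining it, since systemd rejects a second ExecStart for a normal service. A hedged sketch of writing that drop-in by hand; the flags are copied from the log, while the exact file minikube ships may differ slightly:

# Write the kubelet drop-in, then reload systemd and restart the service.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-100545 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

[Install]
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet
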
	I1006 19:51:27.710134  188165 ssh_runner.go:195] Run: crio config
	I1006 19:51:27.786884  188165 cni.go:84] Creating CNI manager for ""
	I1006 19:51:27.786904  188165 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:51:27.786923  188165 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:51:27.786944  188165 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-100545 NodeName:old-k8s-version-100545 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:51:27.787112  188165 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-100545"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:51:27.787196  188165 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1006 19:51:27.796868  188165 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:51:27.796947  188165 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:51:27.812859  188165 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1006 19:51:27.829442  188165 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:51:27.841927  188165 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
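
The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) is what was just written to /var/tmp/minikube/kubeadm.yaml.new. On a brand-new cluster that file is handed to kubeadm in one shot; in this run the cluster already exists, so minikube only diffs it against the previous copy and restarts the control plane instead. A hedged sketch of the fresh-cluster invocation, assuming kubeadm sits next to the kubelet/kubectl binaries shown in the log:

# On first start the rendered config would be consumed via --config.
sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml
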
	I1006 19:51:27.854876  188165 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:51:27.858886  188165 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:51:27.868862  188165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:51:28.039917  188165 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:51:28.060912  188165 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545 for IP: 192.168.76.2
	I1006 19:51:28.060932  188165 certs.go:195] generating shared ca certs ...
	I1006 19:51:28.060949  188165 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:51:28.061092  188165 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:51:28.061147  188165 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:51:28.061155  188165 certs.go:257] generating profile certs ...
	I1006 19:51:28.061252  188165 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.key
	I1006 19:51:28.061312  188165 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.key.139d205a
	I1006 19:51:28.061353  188165 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/proxy-client.key
	I1006 19:51:28.061474  188165 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:51:28.061500  188165 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:51:28.061509  188165 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:51:28.061537  188165 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:51:28.061559  188165 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:51:28.061581  188165 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:51:28.061624  188165 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:51:28.062255  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:51:28.108588  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:51:28.130437  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:51:28.151067  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:51:28.195671  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1006 19:51:28.253549  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 19:51:28.325078  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:51:28.361515  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 19:51:28.405500  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:51:28.426030  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:51:28.445548  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:51:28.466166  188165 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:51:28.497423  188165 ssh_runner.go:195] Run: openssl version
	I1006 19:51:28.508934  188165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:51:28.525269  188165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:51:28.530243  188165 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:51:28.530311  188165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:51:28.572481  188165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 19:51:28.585036  188165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:51:28.596929  188165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:51:28.602646  188165 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:51:28.602710  188165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:51:28.646263  188165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:51:28.655104  188165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:51:28.664775  188165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:51:28.669081  188165 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:51:28.669164  188165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:51:28.710348  188165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
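
The block above installs each CA bundle under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name: OpenSSL resolves trust anchors by looking for `<hash>.0` in that directory, which is why 43502.pem becomes 3ec20f2e.0 and minikubeCA.pem becomes b5213941.0. The same idiom for a single PEM file (the path is just the one from the log; any CA certificate works):

# Link a CA certificate into /etc/ssl/certs under its subject-hash name.
pem=/usr/share/ca-certificates/43502.pem
hash="$(openssl x509 -hash -noout -in "$pem")"
sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
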
	I1006 19:51:28.718695  188165 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:51:28.722911  188165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 19:51:28.767677  188165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 19:51:28.814960  188165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 19:51:28.861374  188165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 19:51:28.926939  188165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 19:51:28.996112  188165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
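
The run of `openssl x509 ... -checkend 86400` calls is how minikube decides whether any control-plane certificate expires within the next 24 hours: the command exits 0 if the certificate is still valid 86400 seconds from now and non-zero otherwise. A hedged loop over the same files checked above:

# Flag any control-plane certificate that expires within 24 hours.
for crt in apiserver-kubelet-client apiserver-etcd-client front-proxy-client \
           etcd/server etcd/healthcheck-client etcd/peer; do
  if ! sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 >/dev/null; then
    echo "${crt}.crt expires within 24h" >&2
  fi
done
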
	I1006 19:51:29.078240  188165 kubeadm.go:400] StartCluster: {Name:old-k8s-version-100545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-100545 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:51:29.078342  188165 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:51:29.078476  188165 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:51:29.122680  188165 cri.go:89] found id: "8c6661172ed70b90aa4e68f8c6df968c5115fa2d90d59204af5eb1097fa7726a"
	I1006 19:51:29.122705  188165 cri.go:89] found id: "ee419682aebe0e3fd692c174ff777858ffe04bf8c1a05f5717da689629decf7a"
	I1006 19:51:29.122720  188165 cri.go:89] found id: "a18305a0a761844f2ffb53c14f0baa5064d0e47665887eddacbafe1fa1fc5fc1"
	I1006 19:51:29.122724  188165 cri.go:89] found id: "e5a93ede6956ec8f39200e0ca417de2ddcc0803039e26b14fc6c083f2a84d73c"
	I1006 19:51:29.122760  188165 cri.go:89] found id: ""
	I1006 19:51:29.122844  188165 ssh_runner.go:195] Run: sudo runc list -f json
	W1006 19:51:29.140556  188165 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:51:29Z" level=error msg="open /run/runc: no such file or directory"
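
Before touching the control plane, minikube takes stock of what is already running: it lists the existing kube-system containers through the CRI (crictl with a pod-namespace label filter) and then asks runc which of those are paused so it can unpause them first. Here the runc step fails harmlessly because /run/runc does not exist under CRI-O's default root, so nothing is treated as paused. Both probes can be reproduced on the node; the jq filter is only an illustration of reading runc's JSON state, not something this run executes:

# Container IDs of everything in kube-system, as the CRI runtime sees them.
sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

# IDs of paused containers according to runc (fails if its state dir is absent).
sudo runc list -f json | jq -r '.[] | select(.status == "paused") | .id'
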
	I1006 19:51:29.140666  188165 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:51:29.154169  188165 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 19:51:29.154189  188165 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 19:51:29.154273  188165 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 19:51:29.161632  188165 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 19:51:29.162269  188165 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-100545" does not appear in /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:51:29.162573  188165 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-2540/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-100545" cluster setting kubeconfig missing "old-k8s-version-100545" context setting]
	I1006 19:51:29.163043  188165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:51:29.164686  188165 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 19:51:29.178347  188165 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1006 19:51:29.178383  188165 kubeadm.go:601] duration metric: took 24.187657ms to restartPrimaryControlPlane
	I1006 19:51:29.178393  188165 kubeadm.go:402] duration metric: took 100.163656ms to StartCluster
	I1006 19:51:29.178440  188165 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:51:29.178545  188165 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:51:29.179589  188165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:51:29.179863  188165 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:51:29.180225  188165 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:51:29.180299  188165 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-100545"
	I1006 19:51:29.180317  188165 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-100545"
	W1006 19:51:29.180327  188165 addons.go:247] addon storage-provisioner should already be in state true
	I1006 19:51:29.180354  188165 host.go:66] Checking if "old-k8s-version-100545" exists ...
	I1006 19:51:29.180847  188165 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:51:29.181288  188165 config.go:182] Loaded profile config "old-k8s-version-100545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1006 19:51:29.181381  188165 addons.go:69] Setting dashboard=true in profile "old-k8s-version-100545"
	I1006 19:51:29.181397  188165 addons.go:238] Setting addon dashboard=true in "old-k8s-version-100545"
	W1006 19:51:29.181425  188165 addons.go:247] addon dashboard should already be in state true
	I1006 19:51:29.181462  188165 host.go:66] Checking if "old-k8s-version-100545" exists ...
	I1006 19:51:29.181938  188165 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:51:29.184446  188165 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-100545"
	I1006 19:51:29.184661  188165 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-100545"
	I1006 19:51:29.184982  188165 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:51:29.184641  188165 out.go:179] * Verifying Kubernetes components...
	I1006 19:51:29.190692  188165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:51:29.233294  188165 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1006 19:51:29.238407  188165 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1006 19:51:29.239555  188165 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 19:51:29.242820  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1006 19:51:29.242848  188165 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1006 19:51:29.242922  188165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:51:29.244776  188165 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:51:29.244797  188165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 19:51:29.244862  188165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:51:29.254614  188165 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-100545"
	W1006 19:51:29.254647  188165 addons.go:247] addon default-storageclass should already be in state true
	I1006 19:51:29.254681  188165 host.go:66] Checking if "old-k8s-version-100545" exists ...
	I1006 19:51:29.255129  188165 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:51:29.285147  188165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:51:29.307813  188165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:51:29.326976  188165 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 19:51:29.327003  188165 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 19:51:29.327067  188165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:51:29.357631  188165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:51:29.526605  188165 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:51:29.558332  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1006 19:51:29.558358  188165 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1006 19:51:29.560651  188165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 19:51:29.564383  188165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:51:29.588946  188165 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-100545" to be "Ready" ...
	I1006 19:51:29.606406  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1006 19:51:29.606428  188165 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1006 19:51:29.697449  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1006 19:51:29.697470  188165 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1006 19:51:29.773941  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1006 19:51:29.773960  188165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1006 19:51:29.837505  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1006 19:51:29.837524  188165 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1006 19:51:29.861571  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1006 19:51:29.861638  188165 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1006 19:51:29.884325  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1006 19:51:29.884396  188165 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1006 19:51:29.908833  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1006 19:51:29.908908  188165 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1006 19:51:29.930167  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 19:51:29.930229  188165 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1006 19:51:29.952406  188165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
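
Addon installation is nothing more than `kubectl apply` executed on the node itself, using the kubeconfig at /var/lib/minikube/kubeconfig and the kubectl binary matching the cluster version, which sidesteps the host kubectl's version skew warned about at the end of this run. A condensed, hedged form of the dashboard step above (the full command lists all ten staged manifests; applying the whole addons directory is an equivalent shortcut only if nothing else has been staged there):

# Apply the staged addon manifests with the cluster-version kubectl.
sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
  /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/
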
	I1006 19:51:26.295527  188975 out.go:252] * Updating the running docker "cert-expiration-585086" container ...
	I1006 19:51:26.295554  188975 machine.go:93] provisionDockerMachine start ...
	I1006 19:51:26.295639  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:26.314924  188975 main.go:141] libmachine: Using SSH client type: native
	I1006 19:51:26.315225  188975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33040 <nil> <nil>}
	I1006 19:51:26.315234  188975 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:51:26.467539  188975 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-585086
	
	I1006 19:51:26.467552  188975 ubuntu.go:182] provisioning hostname "cert-expiration-585086"
	I1006 19:51:26.467612  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:26.513405  188975 main.go:141] libmachine: Using SSH client type: native
	I1006 19:51:26.513706  188975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33040 <nil> <nil>}
	I1006 19:51:26.513715  188975 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-585086 && echo "cert-expiration-585086" | sudo tee /etc/hostname
	I1006 19:51:26.706988  188975 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-585086
	
	I1006 19:51:26.707055  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:26.730667  188975 main.go:141] libmachine: Using SSH client type: native
	I1006 19:51:26.730976  188975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33040 <nil> <nil>}
	I1006 19:51:26.730990  188975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-585086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-585086/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-585086' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:51:26.880480  188975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:51:26.880499  188975 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:51:26.880527  188975 ubuntu.go:190] setting up certificates
	I1006 19:51:26.880536  188975 provision.go:84] configureAuth start
	I1006 19:51:26.880608  188975 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-585086
	I1006 19:51:26.901274  188975 provision.go:143] copyHostCerts
	I1006 19:51:26.901327  188975 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:51:26.901341  188975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:51:26.901404  188975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:51:26.901499  188975 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:51:26.901503  188975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:51:26.901523  188975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:51:26.901568  188975 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:51:26.901571  188975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:51:26.901589  188975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:51:26.901629  188975 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-585086 san=[127.0.0.1 192.168.85.2 cert-expiration-585086 localhost minikube]
	I1006 19:51:28.035060  188975 provision.go:177] copyRemoteCerts
	I1006 19:51:28.035121  188975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:51:28.035161  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:28.057739  188975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:51:28.168800  188975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:51:28.204751  188975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1006 19:51:28.229212  188975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 19:51:28.273191  188975 provision.go:87] duration metric: took 1.392634966s to configureAuth
	I1006 19:51:28.273209  188975 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:51:28.273394  188975 config.go:182] Loaded profile config "cert-expiration-585086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:51:28.273496  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:28.297385  188975 main.go:141] libmachine: Using SSH client type: native
	I1006 19:51:28.297675  188975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33040 <nil> <nil>}
	I1006 19:51:28.297695  188975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:51:33.902704  188165 node_ready.go:49] node "old-k8s-version-100545" is "Ready"
	I1006 19:51:33.902732  188165 node_ready.go:38] duration metric: took 4.313734691s for node "old-k8s-version-100545" to be "Ready" ...
	I1006 19:51:33.902745  188165 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:51:33.902800  188165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:51:33.752611  188975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:51:33.752634  188975 machine.go:96] duration metric: took 7.457073304s to provisionDockerMachine
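
Provisioning of the cert-expiration machine finishes by handing CRI-O the service CIDR as an insecure registry: the command a few lines above writes /etc/sysconfig/crio.minikube (the SSH output echoes the resulting CRIO_MINIKUBE_OPTIONS line) and then restarts crio. The crio unit in the base image is presumed to read that file as an environment file and expand the variable on its command line; that wiring is an assumption, since the unit itself is not shown in this log. A hedged reproduction of the write:

# Hand CRI-O an extra CLI option via its sysconfig file and restart the service.
sudo mkdir -p /etc/sysconfig
printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
  | sudo tee /etc/sysconfig/crio.minikube >/dev/null
sudo systemctl restart crio
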
	I1006 19:51:33.752643  188975 start.go:293] postStartSetup for "cert-expiration-585086" (driver="docker")
	I1006 19:51:33.752652  188975 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:51:33.752710  188975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:51:33.752757  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:33.780885  188975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:51:33.901305  188975 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:51:33.908286  188975 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:51:33.908313  188975 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:51:33.908322  188975 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:51:33.908385  188975 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:51:33.908471  188975 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:51:33.908585  188975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:51:33.916770  188975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:51:33.963175  188975 start.go:296] duration metric: took 210.517651ms for postStartSetup
	I1006 19:51:33.963262  188975 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:51:33.963309  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:33.995970  188975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:51:34.105066  188975 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:51:34.111096  188975 fix.go:56] duration metric: took 7.844525861s for fixHost
	I1006 19:51:34.111110  188975 start.go:83] releasing machines lock for "cert-expiration-585086", held for 7.844563072s
	I1006 19:51:34.111176  188975 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-585086
	I1006 19:51:34.135932  188975 ssh_runner.go:195] Run: cat /version.json
	I1006 19:51:34.135993  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:34.136160  188975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:51:34.137297  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:34.185979  188975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:51:34.188605  188975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:51:34.320599  188975 ssh_runner.go:195] Run: systemctl --version
	I1006 19:51:34.418552  188975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:51:34.497919  188975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:51:34.502680  188975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:51:34.502743  188975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:51:34.519080  188975 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 19:51:34.519094  188975 start.go:495] detecting cgroup driver to use...
	I1006 19:51:34.519138  188975 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:51:34.519206  188975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:51:34.537055  188975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:51:34.555313  188975 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:51:34.555386  188975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:51:34.576295  188975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:51:34.591974  188975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:51:34.867943  188975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:51:35.160838  188975 docker.go:234] disabling docker service ...
	I1006 19:51:35.160927  188975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:51:35.186692  188975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:51:35.209938  188975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:51:35.467190  188975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:51:35.699182  188975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:51:35.739143  188975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:51:35.776234  188975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:51:35.776330  188975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:35.797110  188975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:51:35.797193  188975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:35.822458  188975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:35.840193  188975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:35.859324  188975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:51:35.879908  188975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:35.901666  188975 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:35.917405  188975 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:35.936644  188975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:51:35.949429  188975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
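
Two pieces of runtime plumbing happen in this stretch. First, crictl is pointed at the CRI-O socket through /etc/crictl.yaml (the runtime endpoint is the only key this run writes). Second, a series of sed edits adjusts the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs to match the driver detected on the host, force conmon into the pod cgroup, and allow unprivileged binds to low ports via default_sysctls; the last two commands check the bridge-nf-call-iptables sysctl and enable IPv4 forwarding. A hedged recap of both, with the expected drop-in lines reconstructed from the sed expressions rather than copied from the file:

# 1) Tell crictl where the runtime lives.
sudo tee /etc/crictl.yaml >/dev/null <<'EOF'
runtime-endpoint: unix:///var/run/crio/crio.sock
EOF

# 2) Verify the keys the sed pipeline set (`crio config` can dump the effective config).
grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
  /etc/crio/crio.conf.d/02-crio.conf
# expected, roughly:
#   pause_image = "registry.k8s.io/pause:3.10.1"
#   cgroup_manager = "cgroupfs"
#   conmon_cgroup = "pod"
#   "net.ipv4.ip_unprivileged_port_start=0",
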
	I1006 19:51:35.925795  188165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.365101571s)
	I1006 19:51:36.720356  188165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.155937694s)
	I1006 19:51:37.285920  188165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.333434008s)
	I1006 19:51:37.285972  188165 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.383155882s)
	I1006 19:51:37.286015  188165 api_server.go:72] duration metric: took 8.106124772s to wait for apiserver process to appear ...
	I1006 19:51:37.286022  188165 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:51:37.286039  188165 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:51:37.289030  188165 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-100545 addons enable metrics-server
	
	I1006 19:51:37.292034  188165 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1006 19:51:37.294913  188165 addons.go:514] duration metric: took 8.114678455s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1006 19:51:37.296055  188165 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1006 19:51:37.297451  188165 api_server.go:141] control plane version: v1.28.0
	I1006 19:51:37.297474  188165 api_server.go:131] duration metric: took 11.446114ms to wait for apiserver health ...
	I1006 19:51:37.297492  188165 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:51:37.301146  188165 system_pods.go:59] 8 kube-system pods found
	I1006 19:51:37.301184  188165 system_pods.go:61] "coredns-5dd5756b68-pbzhb" [2a53f16d-7f31-4ccf-a6e8-e0b485d40c7a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:51:37.301197  188165 system_pods.go:61] "etcd-old-k8s-version-100545" [4bbefd5f-0657-4d3b-bb1f-4fa90c467916] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:51:37.301203  188165 system_pods.go:61] "kindnet-l292c" [b6fbb8a4-b9fa-43dd-aaa9-9657f706d606] Running
	I1006 19:51:37.301211  188165 system_pods.go:61] "kube-apiserver-old-k8s-version-100545" [a7ef742d-d559-46f7-93da-a4dfb491f13d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:51:37.301223  188165 system_pods.go:61] "kube-controller-manager-old-k8s-version-100545" [ec8c9c46-e338-4387-a86c-51524b77947f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:51:37.301237  188165 system_pods.go:61] "kube-proxy-h4bcn" [1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e] Running
	I1006 19:51:37.301243  188165 system_pods.go:61] "kube-scheduler-old-k8s-version-100545" [78fe852c-4e6a-4964-b900-fdf5da9290b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:51:37.301248  188165 system_pods.go:61] "storage-provisioner" [ce45de57-885f-44c7-8bc3-19d8c43b20b8] Running
	I1006 19:51:37.301254  188165 system_pods.go:74] duration metric: took 3.757043ms to wait for pod list to return data ...
	I1006 19:51:37.301266  188165 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:51:37.303762  188165 default_sa.go:45] found service account: "default"
	I1006 19:51:37.303787  188165 default_sa.go:55] duration metric: took 2.514774ms for default service account to be created ...
	I1006 19:51:37.303798  188165 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 19:51:37.307188  188165 system_pods.go:86] 8 kube-system pods found
	I1006 19:51:37.307219  188165 system_pods.go:89] "coredns-5dd5756b68-pbzhb" [2a53f16d-7f31-4ccf-a6e8-e0b485d40c7a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:51:37.307229  188165 system_pods.go:89] "etcd-old-k8s-version-100545" [4bbefd5f-0657-4d3b-bb1f-4fa90c467916] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:51:37.307236  188165 system_pods.go:89] "kindnet-l292c" [b6fbb8a4-b9fa-43dd-aaa9-9657f706d606] Running
	I1006 19:51:37.307243  188165 system_pods.go:89] "kube-apiserver-old-k8s-version-100545" [a7ef742d-d559-46f7-93da-a4dfb491f13d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:51:37.307249  188165 system_pods.go:89] "kube-controller-manager-old-k8s-version-100545" [ec8c9c46-e338-4387-a86c-51524b77947f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:51:37.307260  188165 system_pods.go:89] "kube-proxy-h4bcn" [1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e] Running
	I1006 19:51:37.307266  188165 system_pods.go:89] "kube-scheduler-old-k8s-version-100545" [78fe852c-4e6a-4964-b900-fdf5da9290b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:51:37.307273  188165 system_pods.go:89] "storage-provisioner" [ce45de57-885f-44c7-8bc3-19d8c43b20b8] Running
	I1006 19:51:37.307280  188165 system_pods.go:126] duration metric: took 3.477292ms to wait for k8s-apps to be running ...
	I1006 19:51:37.307291  188165 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 19:51:37.307347  188165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:51:37.321458  188165 system_svc.go:56] duration metric: took 14.158717ms WaitForService to wait for kubelet
	I1006 19:51:37.321488  188165 kubeadm.go:586] duration metric: took 8.141595284s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:51:37.321508  188165 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:51:37.324824  188165 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:51:37.324862  188165 node_conditions.go:123] node cpu capacity is 2
	I1006 19:51:37.324875  188165 node_conditions.go:105] duration metric: took 3.361279ms to run NodePressure ...
	I1006 19:51:37.324888  188165 start.go:241] waiting for startup goroutines ...
	I1006 19:51:37.324895  188165 start.go:246] waiting for cluster config update ...
	I1006 19:51:37.324911  188165 start.go:255] writing updated cluster config ...
	I1006 19:51:37.325218  188165 ssh_runner.go:195] Run: rm -f paused
	I1006 19:51:37.328803  188165 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:51:37.333457  188165 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-pbzhb" in "kube-system" namespace to be "Ready" or be gone ...
	W1006 19:51:39.339445  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	I1006 19:51:35.968487  188975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:51:36.436858  188975 ssh_runner.go:195] Run: sudo systemctl restart crio
	W1006 19:51:41.839179  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:43.840191  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:46.339407  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:48.339937  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:50.839410  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:52.840370  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:55.339137  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:57.339634  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:59.840290  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:52:02.340120  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:52:04.840372  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	I1006 19:52:05.839561  188165 pod_ready.go:94] pod "coredns-5dd5756b68-pbzhb" is "Ready"
	I1006 19:52:05.839594  188165 pod_ready.go:86] duration metric: took 28.506110175s for pod "coredns-5dd5756b68-pbzhb" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:05.843094  188165 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:05.848029  188165 pod_ready.go:94] pod "etcd-old-k8s-version-100545" is "Ready"
	I1006 19:52:05.848059  188165 pod_ready.go:86] duration metric: took 4.938109ms for pod "etcd-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:05.851198  188165 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:05.856373  188165 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-100545" is "Ready"
	I1006 19:52:05.856402  188165 pod_ready.go:86] duration metric: took 5.178138ms for pod "kube-apiserver-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:05.859529  188165 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:06.037460  188165 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-100545" is "Ready"
	I1006 19:52:06.037489  188165 pod_ready.go:86] duration metric: took 177.935402ms for pod "kube-controller-manager-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:06.238541  188165 pod_ready.go:83] waiting for pod "kube-proxy-h4bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:06.638200  188165 pod_ready.go:94] pod "kube-proxy-h4bcn" is "Ready"
	I1006 19:52:06.638229  188165 pod_ready.go:86] duration metric: took 399.65948ms for pod "kube-proxy-h4bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:06.838232  188165 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:07.237125  188165 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-100545" is "Ready"
	I1006 19:52:07.237157  188165 pod_ready.go:86] duration metric: took 398.89741ms for pod "kube-scheduler-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:07.237169  188165 pod_ready.go:40] duration metric: took 29.908335656s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:52:07.290601  188165 start.go:623] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1006 19:52:07.293658  188165 out.go:203] 
	W1006 19:52:07.296549  188165 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1006 19:52:07.299426  188165 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1006 19:52:07.302296  188165 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-100545" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.370780912Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1338ace9-a57c-4dc0-972a-97819012bef5 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.371685166Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e3fd6a62-363f-41f1-8a87-4a5cd3250a92 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.37277334Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq/dashboard-metrics-scraper" id=5a332012-e3a3-422d-873b-1cbedb7ceb7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.373012441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.380642059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.381345439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.397811054Z" level=info msg="Created container f8406dc707a80365322f2b9f5d84aec9974cb6a529db4ce3e4077d25f365ebc2: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq/dashboard-metrics-scraper" id=5a332012-e3a3-422d-873b-1cbedb7ceb7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.404012832Z" level=info msg="Starting container: f8406dc707a80365322f2b9f5d84aec9974cb6a529db4ce3e4077d25f365ebc2" id=ed81f972-6096-4731-94a8-e9ef0c75ffd2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.408219755Z" level=info msg="Started container" PID=1654 containerID=f8406dc707a80365322f2b9f5d84aec9974cb6a529db4ce3e4077d25f365ebc2 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq/dashboard-metrics-scraper id=ed81f972-6096-4731-94a8-e9ef0c75ffd2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f76580e7f1233f860e0041de83572c4d021fe969e5975e1b9170fb4eed66b87
	Oct 06 19:52:13 old-k8s-version-100545 conmon[1652]: conmon f8406dc707a80365322f <ninfo>: container 1654 exited with status 1
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.595750257Z" level=info msg="Removing container: 4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c" id=21bdfb51-3c6e-4937-895d-298cc29a17b3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.606088335Z" level=info msg="Error loading conmon cgroup of container 4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c: cgroup deleted" id=21bdfb51-3c6e-4937-895d-298cc29a17b3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.611032345Z" level=info msg="Removed container 4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq/dashboard-metrics-scraper" id=21bdfb51-3c6e-4937-895d-298cc29a17b3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.485236718Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.491424112Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.491460905Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.491485816Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.494690736Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.494731688Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.494755122Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.497992395Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.498027481Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.498049381Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.501454132Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.5014914Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	f8406dc707a80       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   2                   8f76580e7f123       dashboard-metrics-scraper-5f989dc9cf-k25bq       kubernetes-dashboard
	fe27300a0ccb2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           16 seconds ago      Running             storage-provisioner         2                   bf16ea6d41772       storage-provisioner                              kube-system
	ee2e449a1dbc6       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   30 seconds ago      Running             kubernetes-dashboard        0                   1b79f3aa813cc       kubernetes-dashboard-8694d4445c-c7sw4            kubernetes-dashboard
	a580cd1fc5132       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           47 seconds ago      Running             coredns                     1                   44f1e424bf39f       coredns-5dd5756b68-pbzhb                         kube-system
	09e6a8fae84de       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           47 seconds ago      Running             busybox                     1                   c53fc370cee8d       busybox                                          default
	d6f485a8ab137       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           47 seconds ago      Running             kindnet-cni                 1                   b7ed1e97a198e       kindnet-l292c                                    kube-system
	bc0287aa0a83e       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           47 seconds ago      Running             kube-proxy                  1                   d289175a1774b       kube-proxy-h4bcn                                 kube-system
	0b711f9a94598       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           47 seconds ago      Exited              storage-provisioner         1                   bf16ea6d41772       storage-provisioner                              kube-system
	8c6661172ed70       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           53 seconds ago      Running             kube-apiserver              1                   2e869829dbb99       kube-apiserver-old-k8s-version-100545            kube-system
	ee419682aebe0       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           53 seconds ago      Running             kube-scheduler              1                   7778ba51acb24       kube-scheduler-old-k8s-version-100545            kube-system
	a18305a0a7618       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           53 seconds ago      Running             etcd                        1                   6929c7daae5ef       etcd-old-k8s-version-100545                      kube-system
	e5a93ede6956e       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           53 seconds ago      Running             kube-controller-manager     1                   28f4ad530ff66       kube-controller-manager-old-k8s-version-100545   kube-system
	
	
	==> coredns [a580cd1fc5132fd0541524579b8c5f441ed69951103a6c3be50cfa8f18514d5b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50085 - 50462 "HINFO IN 5305303664987629050.2966249946981424498. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023412845s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-100545
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-100545
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=old-k8s-version-100545
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_50_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:50:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-100545
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:52:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:52:04 +0000   Mon, 06 Oct 2025 19:50:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:52:04 +0000   Mon, 06 Oct 2025 19:50:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:52:04 +0000   Mon, 06 Oct 2025 19:50:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:52:04 +0000   Mon, 06 Oct 2025 19:50:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-100545
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c19813e2e1342a290c4893cbe069a28
	  System UUID:                b1b34591-7b1c-445a-99e0-f9c92bb1885f
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-5dd5756b68-pbzhb                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     102s
	  kube-system                 etcd-old-k8s-version-100545                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-l292c                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-old-k8s-version-100545             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-old-k8s-version-100545    200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-h4bcn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-old-k8s-version-100545             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-k25bq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-c7sw4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  Starting                 46s                  kube-proxy       
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-100545 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    115s                 kubelet          Node old-k8s-version-100545 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  115s                 kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     115s                 kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           103s                 node-controller  Node old-k8s-version-100545 event: Registered Node old-k8s-version-100545 in Controller
	  Normal  NodeReady                88s                  kubelet          Node old-k8s-version-100545 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node old-k8s-version-100545 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           35s                  node-controller  Node old-k8s-version-100545 event: Registered Node old-k8s-version-100545 in Controller
	
	
	==> dmesg <==
	[Oct 6 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:21] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:23] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a18305a0a761844f2ffb53c14f0baa5064d0e47665887eddacbafe1fa1fc5fc1] <==
	{"level":"info","ts":"2025-10-06T19:51:29.333287Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-06T19:51:29.333309Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-06T19:51:29.341286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-06T19:51:29.34138Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-06T19:51:29.341463Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-06T19:51:29.341501Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-06T19:51:29.447799Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-06T19:51:29.447994Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-06T19:51:29.450749Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-06T19:51:29.451436Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-06T19:51:29.451511Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-06T19:51:31.13974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-06T19:51:31.139868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-06T19:51:31.139918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-06T19:51:31.139954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-06T19:51:31.139995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-06T19:51:31.140034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-06T19:51:31.140063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-06T19:51:31.145085Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-100545 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-06T19:51:31.145203Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-06T19:51:31.147449Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-06T19:51:31.147762Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-06T19:51:31.148814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-06T19:51:31.155155Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-06T19:51:31.155248Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:52:22 up  1:34,  0 user,  load average: 2.06, 1.36, 1.54
	Linux old-k8s-version-100545 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d6f485a8ab13784b5bea939d07c6e334ff850a29eac20039c5ca941621b9c376] <==
	I1006 19:51:35.247116       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:51:35.247486       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1006 19:51:35.247650       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:51:35.247661       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:51:35.247674       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:51:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:51:35.491924       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:51:35.491970       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:51:35.491984       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:51:35.492425       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1006 19:52:05.492850       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1006 19:52:05.492982       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1006 19:52:05.493037       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1006 19:52:05.549470       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1006 19:52:06.992973       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:52:06.993085       1 metrics.go:72] Registering metrics
	I1006 19:52:06.993152       1 controller.go:711] "Syncing nftables rules"
	I1006 19:52:15.484263       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:52:15.484302       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8c6661172ed70b90aa4e68f8c6df968c5115fa2d90d59204af5eb1097fa7726a] <==
	I1006 19:51:33.638137       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1006 19:51:33.948190       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1006 19:51:33.952806       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:51:33.964233       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1006 19:51:33.964330       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1006 19:51:33.964527       1 shared_informer.go:318] Caches are synced for configmaps
	I1006 19:51:33.965929       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 19:51:33.978474       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1006 19:51:33.980492       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1006 19:51:33.983014       1 aggregator.go:166] initial CRD sync complete...
	I1006 19:51:33.983114       1 autoregister_controller.go:141] Starting autoregister controller
	I1006 19:51:33.983144       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 19:51:33.983173       1 cache.go:39] Caches are synced for autoregister controller
	I1006 19:51:34.001620       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1006 19:51:34.645994       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:51:37.103375       1 controller.go:624] quota admission added evaluator for: namespaces
	I1006 19:51:37.147030       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1006 19:51:37.185084       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:51:37.197476       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:51:37.208471       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1006 19:51:37.260253       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.23.159"}
	I1006 19:51:37.278347       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.173.186"}
	I1006 19:51:47.202353       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 19:51:47.208828       1 controller.go:624] quota admission added evaluator for: endpoints
	I1006 19:51:47.214749       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e5a93ede6956ec8f39200e0ca417de2ddcc0803039e26b14fc6c083f2a84d73c] <==
	I1006 19:51:47.272777       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-c7sw4"
	I1006 19:51:47.283840       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-k25bq"
	I1006 19:51:47.292158       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.498993ms"
	I1006 19:51:47.301373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.174264ms"
	I1006 19:51:47.307327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.099583ms"
	I1006 19:51:47.307568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="60.432µs"
	I1006 19:51:47.312863       1 shared_informer.go:318] Caches are synced for resource quota
	I1006 19:51:47.316285       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="14.838016ms"
	I1006 19:51:47.326439       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="90.003µs"
	I1006 19:51:47.343358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="26.934734ms"
	I1006 19:51:47.343487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.279µs"
	I1006 19:51:47.344848       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.851µs"
	I1006 19:51:47.381091       1 shared_informer.go:318] Caches are synced for attach detach
	I1006 19:51:47.754332       1 shared_informer.go:318] Caches are synced for garbage collector
	I1006 19:51:47.767862       1 shared_informer.go:318] Caches are synced for garbage collector
	I1006 19:51:47.767908       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1006 19:51:52.554792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.607072ms"
	I1006 19:51:52.554909       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.508µs"
	I1006 19:51:56.553551       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="148.368µs"
	I1006 19:51:57.557465       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.643µs"
	I1006 19:51:58.555660       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.815µs"
	I1006 19:52:05.445970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.085135ms"
	I1006 19:52:05.446052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.091µs"
	I1006 19:52:13.617839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.575µs"
	I1006 19:52:17.618729       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.538µs"
	
	
	==> kube-proxy [bc0287aa0a83ee0795d413e51de320c7bf5087a5050b203fd68bfaaa5765f74e] <==
	I1006 19:51:36.271636       1 server_others.go:69] "Using iptables proxy"
	I1006 19:51:36.318897       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1006 19:51:36.456529       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:51:36.476057       1 server_others.go:152] "Using iptables Proxier"
	I1006 19:51:36.476101       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1006 19:51:36.476114       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1006 19:51:36.476173       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1006 19:51:36.476465       1 server.go:846] "Version info" version="v1.28.0"
	I1006 19:51:36.476482       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:51:36.478215       1 config.go:188] "Starting service config controller"
	I1006 19:51:36.478250       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1006 19:51:36.478276       1 config.go:97] "Starting endpoint slice config controller"
	I1006 19:51:36.478281       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1006 19:51:36.491956       1 config.go:315] "Starting node config controller"
	I1006 19:51:36.491979       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1006 19:51:36.580230       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1006 19:51:36.580291       1 shared_informer.go:318] Caches are synced for service config
	I1006 19:51:36.600585       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ee419682aebe0e3fd692c174ff777858ffe04bf8c1a05f5717da689629decf7a] <==
	I1006 19:51:30.833262       1 serving.go:348] Generated self-signed cert in-memory
	W1006 19:51:33.708070       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1006 19:51:33.708113       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1006 19:51:33.708124       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1006 19:51:33.708132       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1006 19:51:33.851485       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1006 19:51:33.851596       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:51:33.853523       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1006 19:51:33.853741       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:51:33.853789       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1006 19:51:33.860359       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1006 19:51:33.942706       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1006 19:51:33.942805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1006 19:51:33.942903       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1006 19:51:33.942943       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1006 19:51:33.943035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1006 19:51:33.943070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1006 19:51:33.943627       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1006 19:51:33.943688       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1006 19:51:33.967151       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1006 19:51:33.967643       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1006 19:51:35.268032       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 06 19:51:47 old-k8s-version-100545 kubelet[775]: I1006 19:51:47.340599     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ba0aac8a-6f26-492c-859a-fbc86c90eb65-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-k25bq\" (UID: \"ba0aac8a-6f26-492c-859a-fbc86c90eb65\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq"
	Oct 06 19:51:47 old-k8s-version-100545 kubelet[775]: I1006 19:51:47.340719     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/953e3fd1-a661-4e2a-9079-eeeb2e0e3746-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-c7sw4\" (UID: \"953e3fd1-a661-4e2a-9079-eeeb2e0e3746\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c7sw4"
	Oct 06 19:51:47 old-k8s-version-100545 kubelet[775]: I1006 19:51:47.340827     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8xp\" (UniqueName: \"kubernetes.io/projected/953e3fd1-a661-4e2a-9079-eeeb2e0e3746-kube-api-access-td8xp\") pod \"kubernetes-dashboard-8694d4445c-c7sw4\" (UID: \"953e3fd1-a661-4e2a-9079-eeeb2e0e3746\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c7sw4"
	Oct 06 19:51:47 old-k8s-version-100545 kubelet[775]: W1006 19:51:47.640086     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/crio-8f76580e7f1233f860e0041de83572c4d021fe969e5975e1b9170fb4eed66b87 WatchSource:0}: Error finding container 8f76580e7f1233f860e0041de83572c4d021fe969e5975e1b9170fb4eed66b87: Status 404 returned error can't find the container with id 8f76580e7f1233f860e0041de83572c4d021fe969e5975e1b9170fb4eed66b87
	Oct 06 19:51:47 old-k8s-version-100545 kubelet[775]: W1006 19:51:47.640436     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/crio-1b79f3aa813cc9c0e40607bce91a8cbef2deebe431b888c64281b2bef64d0cbb WatchSource:0}: Error finding container 1b79f3aa813cc9c0e40607bce91a8cbef2deebe431b888c64281b2bef64d0cbb: Status 404 returned error can't find the container with id 1b79f3aa813cc9c0e40607bce91a8cbef2deebe431b888c64281b2bef64d0cbb
	Oct 06 19:51:56 old-k8s-version-100545 kubelet[775]: I1006 19:51:56.532626     775 scope.go:117] "RemoveContainer" containerID="b2dff1f776cbda15b9146d9dc0b5c45b9e942d4ae7d8fee34936aa9fd9d49383"
	Oct 06 19:51:56 old-k8s-version-100545 kubelet[775]: I1006 19:51:56.554507     775 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c7sw4" podStartSLOduration=5.131379207 podCreationTimestamp="2025-10-06 19:51:47 +0000 UTC" firstStartedPulling="2025-10-06 19:51:47.64680959 +0000 UTC m=+19.572554147" lastFinishedPulling="2025-10-06 19:51:52.069854077 +0000 UTC m=+23.995598634" observedRunningTime="2025-10-06 19:51:52.540663209 +0000 UTC m=+24.466407774" watchObservedRunningTime="2025-10-06 19:51:56.554423694 +0000 UTC m=+28.480168259"
	Oct 06 19:51:57 old-k8s-version-100545 kubelet[775]: I1006 19:51:57.536663     775 scope.go:117] "RemoveContainer" containerID="b2dff1f776cbda15b9146d9dc0b5c45b9e942d4ae7d8fee34936aa9fd9d49383"
	Oct 06 19:51:57 old-k8s-version-100545 kubelet[775]: I1006 19:51:57.536994     775 scope.go:117] "RemoveContainer" containerID="4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c"
	Oct 06 19:51:57 old-k8s-version-100545 kubelet[775]: E1006 19:51:57.537268     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-k25bq_kubernetes-dashboard(ba0aac8a-6f26-492c-859a-fbc86c90eb65)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq" podUID="ba0aac8a-6f26-492c-859a-fbc86c90eb65"
	Oct 06 19:51:58 old-k8s-version-100545 kubelet[775]: I1006 19:51:58.540857     775 scope.go:117] "RemoveContainer" containerID="4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c"
	Oct 06 19:51:58 old-k8s-version-100545 kubelet[775]: E1006 19:51:58.541141     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-k25bq_kubernetes-dashboard(ba0aac8a-6f26-492c-859a-fbc86c90eb65)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq" podUID="ba0aac8a-6f26-492c-859a-fbc86c90eb65"
	Oct 06 19:51:59 old-k8s-version-100545 kubelet[775]: I1006 19:51:59.543310     775 scope.go:117] "RemoveContainer" containerID="4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c"
	Oct 06 19:51:59 old-k8s-version-100545 kubelet[775]: E1006 19:51:59.543601     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-k25bq_kubernetes-dashboard(ba0aac8a-6f26-492c-859a-fbc86c90eb65)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq" podUID="ba0aac8a-6f26-492c-859a-fbc86c90eb65"
	Oct 06 19:52:05 old-k8s-version-100545 kubelet[775]: I1006 19:52:05.571685     775 scope.go:117] "RemoveContainer" containerID="0b711f9a9459848bc688692b203cbbb20fe8f51db6e39706fde7b2385746a690"
	Oct 06 19:52:13 old-k8s-version-100545 kubelet[775]: I1006 19:52:13.370061     775 scope.go:117] "RemoveContainer" containerID="4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c"
	Oct 06 19:52:13 old-k8s-version-100545 kubelet[775]: I1006 19:52:13.593485     775 scope.go:117] "RemoveContainer" containerID="4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c"
	Oct 06 19:52:13 old-k8s-version-100545 kubelet[775]: I1006 19:52:13.593931     775 scope.go:117] "RemoveContainer" containerID="f8406dc707a80365322f2b9f5d84aec9974cb6a529db4ce3e4077d25f365ebc2"
	Oct 06 19:52:13 old-k8s-version-100545 kubelet[775]: E1006 19:52:13.594291     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-k25bq_kubernetes-dashboard(ba0aac8a-6f26-492c-859a-fbc86c90eb65)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq" podUID="ba0aac8a-6f26-492c-859a-fbc86c90eb65"
	Oct 06 19:52:17 old-k8s-version-100545 kubelet[775]: I1006 19:52:17.603675     775 scope.go:117] "RemoveContainer" containerID="f8406dc707a80365322f2b9f5d84aec9974cb6a529db4ce3e4077d25f365ebc2"
	Oct 06 19:52:17 old-k8s-version-100545 kubelet[775]: E1006 19:52:17.604002     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-k25bq_kubernetes-dashboard(ba0aac8a-6f26-492c-859a-fbc86c90eb65)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq" podUID="ba0aac8a-6f26-492c-859a-fbc86c90eb65"
	Oct 06 19:52:19 old-k8s-version-100545 kubelet[775]: I1006 19:52:19.467789     775 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 06 19:52:19 old-k8s-version-100545 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 06 19:52:19 old-k8s-version-100545 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 06 19:52:19 old-k8s-version-100545 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ee2e449a1dbc6e891f301528edaf5f304cc37cb287756a88bd15304dfbe66e59] <==
	2025/10/06 19:51:52 Using namespace: kubernetes-dashboard
	2025/10/06 19:51:52 Using in-cluster config to connect to apiserver
	2025/10/06 19:51:52 Using secret token for csrf signing
	2025/10/06 19:51:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/06 19:51:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/06 19:51:52 Successful initial request to the apiserver, version: v1.28.0
	2025/10/06 19:51:52 Generating JWE encryption key
	2025/10/06 19:51:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/06 19:51:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/06 19:51:52 Initializing JWE encryption key from synchronized object
	2025/10/06 19:51:52 Creating in-cluster Sidecar client
	2025/10/06 19:51:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/06 19:51:52 Serving insecurely on HTTP port: 9090
	2025/10/06 19:52:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/06 19:51:52 Starting overwatch
	
	
	==> storage-provisioner [0b711f9a9459848bc688692b203cbbb20fe8f51db6e39706fde7b2385746a690] <==
	I1006 19:51:35.526165       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1006 19:52:05.549128       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fe27300a0ccb2d60aa2a92ac42db7ebc34ff4c80d2bd745b7157749d807ccd59] <==
	I1006 19:52:05.634594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 19:52:05.648006       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 19:52:05.648679       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1006 19:52:23.061506       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 19:52:23.061762       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-100545_e72cbd91-d0ba-4937-b295-395cc8b7ab59!
	I1006 19:52:23.062047       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"411b65f9-f70e-49ad-ba6c-89d8933e38af", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-100545_e72cbd91-d0ba-4937-b295-395cc8b7ab59 became leader
	I1006 19:52:23.164757       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-100545_e72cbd91-d0ba-4937-b295-395cc8b7ab59!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-100545 -n old-k8s-version-100545
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-100545 -n old-k8s-version-100545: exit status 2 (350.024966ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-100545 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-100545
helpers_test.go:243: (dbg) docker inspect old-k8s-version-100545:

-- stdout --
	[
	    {
	        "Id": "44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b",
	        "Created": "2025-10-06T19:50:04.309020012Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 188293,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:51:20.978085574Z",
	            "FinishedAt": "2025-10-06T19:51:20.121749444Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/hostname",
	        "HostsPath": "/var/lib/docker/containers/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/hosts",
	        "LogPath": "/var/lib/docker/containers/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b-json.log",
	        "Name": "/old-k8s-version-100545",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-100545:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-100545",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b",
	                "LowerDir": "/var/lib/docker/overlay2/7739f5fcc80275a40519b7d8366e72b83c80e8b92ef511dd23fbdc4e3cd32dd2-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7739f5fcc80275a40519b7d8366e72b83c80e8b92ef511dd23fbdc4e3cd32dd2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7739f5fcc80275a40519b7d8366e72b83c80e8b92ef511dd23fbdc4e3cd32dd2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7739f5fcc80275a40519b7d8366e72b83c80e8b92ef511dd23fbdc4e3cd32dd2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-100545",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-100545/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-100545",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-100545",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-100545",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19a11cc0ec1dbf538919e954e81c26f2daca1fc39a4d9eaa88e6b1484b102b48",
	            "SandboxKey": "/var/run/docker/netns/19a11cc0ec1d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-100545": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:07:94:ac:27:56",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "70390eeacb58521b859ee9aa701da0b462d8cfbec3301aa774d326d82c9a1e6e",
	                    "EndpointID": "104306678ed14a41ffe90e6b78fc488e70c07196798f74f6bc97253bdee25926",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-100545",
	                        "44567b8f0b33"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
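The full inspect dump above is what the harness records; when poking at the container by hand it can be quicker to pull individual fields with docker's Go-template formatter, as the harness itself does for the SSH port. A minimal sketch, assuming the old-k8s-version-100545 container still exists on the host:

    # Runtime state of the profile container
    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' old-k8s-version-100545
    # IP address on the per-profile network (map key accessed via index because of the hyphens in the name)
    docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-100545").IPAddress}}' old-k8s-version-100545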
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-100545 -n old-k8s-version-100545
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-100545 -n old-k8s-version-100545: exit status 2 (367.785669ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-100545 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-100545 logs -n 25: (1.255265293s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-053944 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo containerd config dump                                                                                                                                                                                                  │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ -p cilium-053944 sudo crio config                                                                                                                                                                                                             │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ delete  │ -p cilium-053944                                                                                                                                                                                                                              │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │ 06 Oct 25 19:40 UTC │
	│ start   │ -p force-systemd-env-760371 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-760371  │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ force-systemd-flag-203169 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-203169 │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:47 UTC │
	│ delete  │ -p force-systemd-flag-203169                                                                                                                                                                                                                  │ force-systemd-flag-203169 │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:47 UTC │
	│ start   │ -p cert-expiration-585086 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-585086    │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:48 UTC │
	│ delete  │ -p force-systemd-env-760371                                                                                                                                                                                                                   │ force-systemd-env-760371  │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ start   │ -p cert-options-593131 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ ssh     │ cert-options-593131 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ ssh     │ -p cert-options-593131 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ delete  │ -p cert-options-593131                                                                                                                                                                                                                        │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-100545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │                     │
	│ stop    │ -p old-k8s-version-100545 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-100545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:51 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p cert-expiration-585086 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-585086    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │                     │
	│ image   │ old-k8s-version-100545 image list --format=json                                                                                                                                                                                               │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ pause   │ -p old-k8s-version-100545 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:51:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:51:25.966297  188975 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:51:25.966407  188975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:51:25.966410  188975 out.go:374] Setting ErrFile to fd 2...
	I1006 19:51:25.966422  188975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:51:25.966702  188975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:51:25.967057  188975 out.go:368] Setting JSON to false
	I1006 19:51:25.968062  188975 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5621,"bootTime":1759774665,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:51:25.968123  188975 start.go:140] virtualization:  
	I1006 19:51:25.971494  188975 out.go:179] * [cert-expiration-585086] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:51:25.975487  188975 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:51:25.975605  188975 notify.go:220] Checking for updates...
	I1006 19:51:25.981299  188975 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:51:25.984387  188975 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:51:25.987307  188975 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:51:25.991645  188975 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:51:25.994873  188975 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:51:25.998294  188975 config.go:182] Loaded profile config "cert-expiration-585086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:51:25.998846  188975 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:51:26.035673  188975 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:51:26.035796  188975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:51:26.116278  188975 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-06 19:51:26.100853485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:51:26.116403  188975 docker.go:318] overlay module found
	I1006 19:51:26.119613  188975 out.go:179] * Using the docker driver based on existing profile
	I1006 19:51:26.122575  188975 start.go:304] selected driver: docker
	I1006 19:51:26.122586  188975 start.go:924] validating driver "docker" against &{Name:cert-expiration-585086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-585086 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:51:26.122681  188975 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:51:26.123492  188975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:51:26.224405  188975 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-06 19:51:26.19342503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:51:26.224692  188975 cni.go:84] Creating CNI manager for ""
	I1006 19:51:26.224748  188975 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:51:26.224786  188975 start.go:348] cluster config:
	{Name:cert-expiration-585086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-585086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1006 19:51:26.228093  188975 out.go:179] * Starting "cert-expiration-585086" primary control-plane node in "cert-expiration-585086" cluster
	I1006 19:51:26.231102  188975 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:51:26.233909  188975 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:51:26.236856  188975 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:51:26.236904  188975 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:51:26.236912  188975 cache.go:58] Caching tarball of preloaded images
	I1006 19:51:26.236991  188975 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:51:26.236998  188975 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:51:26.237112  188975 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/cert-expiration-585086/config.json ...
	I1006 19:51:26.237344  188975 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:51:26.266378  188975 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:51:26.266390  188975 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:51:26.266424  188975 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:51:26.266453  188975 start.go:360] acquireMachinesLock for cert-expiration-585086: {Name:mkfbc592fc0fdee897fdcca1ec0865b663d6035c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:51:26.266540  188975 start.go:364] duration metric: took 51.02µs to acquireMachinesLock for "cert-expiration-585086"
	I1006 19:51:26.266559  188975 start.go:96] Skipping create...Using existing machine configuration
	I1006 19:51:26.266567  188975 fix.go:54] fixHost starting: 
	I1006 19:51:26.266862  188975 cli_runner.go:164] Run: docker container inspect cert-expiration-585086 --format={{.State.Status}}
	I1006 19:51:26.292168  188975 fix.go:112] recreateIfNeeded on cert-expiration-585086: state=Running err=<nil>
	W1006 19:51:26.292187  188975 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 19:51:25.699691  188165 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:51:25.703226  188165 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:51:25.703301  188165 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:51:25.703318  188165 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:51:25.703375  188165 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:51:25.703471  188165 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:51:25.703585  188165 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:51:25.711264  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:51:25.728800  188165 start.go:296] duration metric: took 151.303877ms for postStartSetup
	I1006 19:51:25.728882  188165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:51:25.728924  188165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:51:25.746639  188165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:51:25.840709  188165 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:51:25.845369  188165 fix.go:56] duration metric: took 4.92360908s for fixHost
	I1006 19:51:25.845391  188165 start.go:83] releasing machines lock for "old-k8s-version-100545", held for 4.923660051s
	I1006 19:51:25.845461  188165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-100545
	I1006 19:51:25.862329  188165 ssh_runner.go:195] Run: cat /version.json
	I1006 19:51:25.862368  188165 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:51:25.862392  188165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:51:25.862433  188165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:51:25.880031  188165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:51:25.885458  188165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:51:26.091917  188165 ssh_runner.go:195] Run: systemctl --version
	I1006 19:51:26.099191  188165 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:51:26.175086  188165 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:51:26.182457  188165 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:51:26.182558  188165 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:51:26.197017  188165 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 19:51:26.197038  188165 start.go:495] detecting cgroup driver to use...
	I1006 19:51:26.197096  188165 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:51:26.197153  188165 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:51:26.216059  188165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:51:26.233076  188165 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:51:26.233138  188165 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:51:26.259495  188165 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:51:26.287079  188165 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:51:26.441792  188165 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:51:26.601612  188165 docker.go:234] disabling docker service ...
	I1006 19:51:26.601692  188165 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:51:26.618462  188165 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:51:26.641188  188165 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:51:26.821293  188165 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:51:26.975558  188165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:51:26.995945  188165 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:51:27.013046  188165 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1006 19:51:27.013128  188165 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:27.024581  188165 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:51:27.024647  188165 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:27.034400  188165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:27.044317  188165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:27.054398  188165 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:51:27.063447  188165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:27.073356  188165 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:27.085150  188165 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:27.094564  188165 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:51:27.102877  188165 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:51:27.111803  188165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:51:27.267175  188165 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 19:51:27.481454  188165 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:51:27.481580  188165 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:51:27.485833  188165 start.go:563] Will wait 60s for crictl version
	I1006 19:51:27.485902  188165 ssh_runner.go:195] Run: which crictl
	I1006 19:51:27.489841  188165 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:51:27.524574  188165 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:51:27.524653  188165 ssh_runner.go:195] Run: crio --version
	I1006 19:51:27.555683  188165 ssh_runner.go:195] Run: crio --version
	I1006 19:51:27.592189  188165 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1006 19:51:27.594928  188165 cli_runner.go:164] Run: docker network inspect old-k8s-version-100545 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:51:27.618374  188165 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1006 19:51:27.622648  188165 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:51:27.632124  188165 kubeadm.go:883] updating cluster {Name:old-k8s-version-100545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-100545 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:51:27.632224  188165 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1006 19:51:27.632274  188165 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:51:27.669743  188165 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:51:27.669763  188165 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:51:27.669822  188165 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:51:27.709931  188165 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:51:27.709952  188165 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:51:27.709959  188165 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1006 19:51:27.710056  188165 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-100545 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-100545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:51:27.710134  188165 ssh_runner.go:195] Run: crio config
	I1006 19:51:27.786884  188165 cni.go:84] Creating CNI manager for ""
	I1006 19:51:27.786904  188165 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:51:27.786923  188165 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:51:27.786944  188165 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-100545 NodeName:old-k8s-version-100545 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:51:27.787112  188165 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-100545"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:51:27.787196  188165 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1006 19:51:27.796868  188165 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:51:27.796947  188165 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:51:27.812859  188165 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1006 19:51:27.829442  188165 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:51:27.841927  188165 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1006 19:51:27.854876  188165 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:51:27.858886  188165 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:51:27.868862  188165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:51:28.039917  188165 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:51:28.060912  188165 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545 for IP: 192.168.76.2
	I1006 19:51:28.060932  188165 certs.go:195] generating shared ca certs ...
	I1006 19:51:28.060949  188165 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:51:28.061092  188165 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:51:28.061147  188165 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:51:28.061155  188165 certs.go:257] generating profile certs ...
	I1006 19:51:28.061252  188165 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.key
	I1006 19:51:28.061312  188165 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.key.139d205a
	I1006 19:51:28.061353  188165 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/proxy-client.key
	I1006 19:51:28.061474  188165 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:51:28.061500  188165 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:51:28.061509  188165 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:51:28.061537  188165 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:51:28.061559  188165 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:51:28.061581  188165 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:51:28.061624  188165 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:51:28.062255  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:51:28.108588  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:51:28.130437  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:51:28.151067  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:51:28.195671  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1006 19:51:28.253549  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 19:51:28.325078  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:51:28.361515  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 19:51:28.405500  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:51:28.426030  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:51:28.445548  188165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:51:28.466166  188165 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:51:28.497423  188165 ssh_runner.go:195] Run: openssl version
	I1006 19:51:28.508934  188165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:51:28.525269  188165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:51:28.530243  188165 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:51:28.530311  188165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:51:28.572481  188165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 19:51:28.585036  188165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:51:28.596929  188165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:51:28.602646  188165 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:51:28.602710  188165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:51:28.646263  188165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:51:28.655104  188165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:51:28.664775  188165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:51:28.669081  188165 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:51:28.669164  188165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:51:28.710348  188165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:51:28.718695  188165 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:51:28.722911  188165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 19:51:28.767677  188165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 19:51:28.814960  188165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 19:51:28.861374  188165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 19:51:28.926939  188165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 19:51:28.996112  188165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
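	Each openssl run above uses -checkend 86400, i.e. it fails if the certificate expires within the next 24 hours, which is what would trigger minikube to regenerate it. A rough standard-library equivalent in Go, shown only as a sketch (the path is one of the files checked above):

	// certcheck.go — illustrative only: roughly what `openssl x509 -checkend 86400`
	// does, using the Go standard library.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatalf("read cert: %v", err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatalf("parse cert: %v", err)
		}
		// -checkend 86400: fail if the certificate expires within the next 24h.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 24 hours")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24 hours")
	}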
	I1006 19:51:29.078240  188165 kubeadm.go:400] StartCluster: {Name:old-k8s-version-100545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-100545 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:51:29.078342  188165 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:51:29.078476  188165 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:51:29.122680  188165 cri.go:89] found id: "8c6661172ed70b90aa4e68f8c6df968c5115fa2d90d59204af5eb1097fa7726a"
	I1006 19:51:29.122705  188165 cri.go:89] found id: "ee419682aebe0e3fd692c174ff777858ffe04bf8c1a05f5717da689629decf7a"
	I1006 19:51:29.122720  188165 cri.go:89] found id: "a18305a0a761844f2ffb53c14f0baa5064d0e47665887eddacbafe1fa1fc5fc1"
	I1006 19:51:29.122724  188165 cri.go:89] found id: "e5a93ede6956ec8f39200e0ca417de2ddcc0803039e26b14fc6c083f2a84d73c"
	I1006 19:51:29.122760  188165 cri.go:89] found id: ""
	I1006 19:51:29.122844  188165 ssh_runner.go:195] Run: sudo runc list -f json
	W1006 19:51:29.140556  188165 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:51:29Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:51:29.140666  188165 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:51:29.154169  188165 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 19:51:29.154189  188165 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 19:51:29.154273  188165 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 19:51:29.161632  188165 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 19:51:29.162269  188165 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-100545" does not appear in /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:51:29.162573  188165 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-2540/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-100545" cluster setting kubeconfig missing "old-k8s-version-100545" context setting]
	I1006 19:51:29.163043  188165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:51:29.164686  188165 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 19:51:29.178347  188165 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1006 19:51:29.178383  188165 kubeadm.go:601] duration metric: took 24.187657ms to restartPrimaryControlPlane
	I1006 19:51:29.178393  188165 kubeadm.go:402] duration metric: took 100.163656ms to StartCluster
	I1006 19:51:29.178440  188165 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:51:29.178545  188165 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:51:29.179589  188165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:51:29.179863  188165 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:51:29.180225  188165 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:51:29.180299  188165 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-100545"
	I1006 19:51:29.180317  188165 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-100545"
	W1006 19:51:29.180327  188165 addons.go:247] addon storage-provisioner should already be in state true
	I1006 19:51:29.180354  188165 host.go:66] Checking if "old-k8s-version-100545" exists ...
	I1006 19:51:29.180847  188165 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:51:29.181288  188165 config.go:182] Loaded profile config "old-k8s-version-100545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1006 19:51:29.181381  188165 addons.go:69] Setting dashboard=true in profile "old-k8s-version-100545"
	I1006 19:51:29.181397  188165 addons.go:238] Setting addon dashboard=true in "old-k8s-version-100545"
	W1006 19:51:29.181425  188165 addons.go:247] addon dashboard should already be in state true
	I1006 19:51:29.181462  188165 host.go:66] Checking if "old-k8s-version-100545" exists ...
	I1006 19:51:29.181938  188165 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:51:29.184446  188165 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-100545"
	I1006 19:51:29.184661  188165 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-100545"
	I1006 19:51:29.184982  188165 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:51:29.184641  188165 out.go:179] * Verifying Kubernetes components...
	I1006 19:51:29.190692  188165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:51:29.233294  188165 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1006 19:51:29.238407  188165 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1006 19:51:29.239555  188165 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 19:51:29.242820  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1006 19:51:29.242848  188165 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1006 19:51:29.242922  188165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:51:29.244776  188165 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:51:29.244797  188165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 19:51:29.244862  188165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:51:29.254614  188165 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-100545"
	W1006 19:51:29.254647  188165 addons.go:247] addon default-storageclass should already be in state true
	I1006 19:51:29.254681  188165 host.go:66] Checking if "old-k8s-version-100545" exists ...
	I1006 19:51:29.255129  188165 cli_runner.go:164] Run: docker container inspect old-k8s-version-100545 --format={{.State.Status}}
	I1006 19:51:29.285147  188165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:51:29.307813  188165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:51:29.326976  188165 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 19:51:29.327003  188165 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 19:51:29.327067  188165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-100545
	I1006 19:51:29.357631  188165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/old-k8s-version-100545/id_rsa Username:docker}
	I1006 19:51:29.526605  188165 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:51:29.558332  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1006 19:51:29.558358  188165 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1006 19:51:29.560651  188165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 19:51:29.564383  188165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:51:29.588946  188165 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-100545" to be "Ready" ...
	I1006 19:51:29.606406  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1006 19:51:29.606428  188165 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1006 19:51:29.697449  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1006 19:51:29.697470  188165 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1006 19:51:29.773941  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1006 19:51:29.773960  188165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1006 19:51:29.837505  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1006 19:51:29.837524  188165 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1006 19:51:29.861571  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1006 19:51:29.861638  188165 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1006 19:51:29.884325  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1006 19:51:29.884396  188165 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1006 19:51:29.908833  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1006 19:51:29.908908  188165 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1006 19:51:29.930167  188165 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 19:51:29.930229  188165 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1006 19:51:29.952406  188165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 19:51:26.295527  188975 out.go:252] * Updating the running docker "cert-expiration-585086" container ...
	I1006 19:51:26.295554  188975 machine.go:93] provisionDockerMachine start ...
	I1006 19:51:26.295639  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:26.314924  188975 main.go:141] libmachine: Using SSH client type: native
	I1006 19:51:26.315225  188975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33040 <nil> <nil>}
	I1006 19:51:26.315234  188975 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:51:26.467539  188975 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-585086
	
	I1006 19:51:26.467552  188975 ubuntu.go:182] provisioning hostname "cert-expiration-585086"
	I1006 19:51:26.467612  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:26.513405  188975 main.go:141] libmachine: Using SSH client type: native
	I1006 19:51:26.513706  188975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33040 <nil> <nil>}
	I1006 19:51:26.513715  188975 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-585086 && echo "cert-expiration-585086" | sudo tee /etc/hostname
	I1006 19:51:26.706988  188975 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-585086
	
	I1006 19:51:26.707055  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:26.730667  188975 main.go:141] libmachine: Using SSH client type: native
	I1006 19:51:26.730976  188975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33040 <nil> <nil>}
	I1006 19:51:26.730990  188975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-585086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-585086/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-585086' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:51:26.880480  188975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:51:26.880499  188975 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:51:26.880527  188975 ubuntu.go:190] setting up certificates
	I1006 19:51:26.880536  188975 provision.go:84] configureAuth start
	I1006 19:51:26.880608  188975 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-585086
	I1006 19:51:26.901274  188975 provision.go:143] copyHostCerts
	I1006 19:51:26.901327  188975 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:51:26.901341  188975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:51:26.901404  188975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:51:26.901499  188975 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:51:26.901503  188975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:51:26.901523  188975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:51:26.901568  188975 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:51:26.901571  188975 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:51:26.901589  188975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:51:26.901629  188975 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-585086 san=[127.0.0.1 192.168.85.2 cert-expiration-585086 localhost minikube]
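	configureAuth above issues a Docker machine server certificate whose SANs cover the loopback address, the container IP, and the machine's hostnames. The sketch below is not minikube's implementation; it only illustrates issuing a SAN-bearing server certificate with crypto/x509, self-signed for brevity where minikube signs with its ca-key.pem:

	// gencert.go — illustrative sketch: create a server certificate carrying the
	// SANs listed in the log line above (self-signed here, CA-signed in minikube).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-585086"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"cert-expiration-585086", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}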
	I1006 19:51:28.035060  188975 provision.go:177] copyRemoteCerts
	I1006 19:51:28.035121  188975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:51:28.035161  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:28.057739  188975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:51:28.168800  188975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:51:28.204751  188975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1006 19:51:28.229212  188975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 19:51:28.273191  188975 provision.go:87] duration metric: took 1.392634966s to configureAuth
	I1006 19:51:28.273209  188975 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:51:28.273394  188975 config.go:182] Loaded profile config "cert-expiration-585086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:51:28.273496  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:28.297385  188975 main.go:141] libmachine: Using SSH client type: native
	I1006 19:51:28.297675  188975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33040 <nil> <nil>}
	I1006 19:51:28.297695  188975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:51:33.902704  188165 node_ready.go:49] node "old-k8s-version-100545" is "Ready"
	I1006 19:51:33.902732  188165 node_ready.go:38] duration metric: took 4.313734691s for node "old-k8s-version-100545" to be "Ready" ...
	I1006 19:51:33.902745  188165 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:51:33.902800  188165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:51:33.752611  188975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:51:33.752634  188975 machine.go:96] duration metric: took 7.457073304s to provisionDockerMachine
	I1006 19:51:33.752643  188975 start.go:293] postStartSetup for "cert-expiration-585086" (driver="docker")
	I1006 19:51:33.752652  188975 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:51:33.752710  188975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:51:33.752757  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:33.780885  188975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:51:33.901305  188975 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:51:33.908286  188975 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:51:33.908313  188975 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:51:33.908322  188975 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:51:33.908385  188975 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:51:33.908471  188975 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:51:33.908585  188975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:51:33.916770  188975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:51:33.963175  188975 start.go:296] duration metric: took 210.517651ms for postStartSetup
	I1006 19:51:33.963262  188975 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:51:33.963309  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:33.995970  188975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:51:34.105066  188975 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:51:34.111096  188975 fix.go:56] duration metric: took 7.844525861s for fixHost
	I1006 19:51:34.111110  188975 start.go:83] releasing machines lock for "cert-expiration-585086", held for 7.844563072s
	I1006 19:51:34.111176  188975 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-585086
	I1006 19:51:34.135932  188975 ssh_runner.go:195] Run: cat /version.json
	I1006 19:51:34.135993  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:34.136160  188975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:51:34.137297  188975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-585086
	I1006 19:51:34.185979  188975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:51:34.188605  188975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33040 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/cert-expiration-585086/id_rsa Username:docker}
	I1006 19:51:34.320599  188975 ssh_runner.go:195] Run: systemctl --version
	I1006 19:51:34.418552  188975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:51:34.497919  188975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:51:34.502680  188975 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:51:34.502743  188975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:51:34.519080  188975 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 19:51:34.519094  188975 start.go:495] detecting cgroup driver to use...
	I1006 19:51:34.519138  188975 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:51:34.519206  188975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:51:34.537055  188975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:51:34.555313  188975 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:51:34.555386  188975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:51:34.576295  188975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:51:34.591974  188975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:51:34.867943  188975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:51:35.160838  188975 docker.go:234] disabling docker service ...
	I1006 19:51:35.160927  188975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:51:35.186692  188975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:51:35.209938  188975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:51:35.467190  188975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:51:35.699182  188975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:51:35.739143  188975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:51:35.776234  188975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:51:35.776330  188975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:35.797110  188975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:51:35.797193  188975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:35.822458  188975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:35.840193  188975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:35.859324  188975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:51:35.879908  188975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:35.901666  188975 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:35.917405  188975 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:51:35.936644  188975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:51:35.949429  188975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
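	The run of sed -i commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10.1 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and net.ipv4.ip_unprivileged_port_start=0 as a default sysctl, then enables IP forwarding. The Go sketch below only illustrates the pattern of one such in-place rewrite; the sample file contents are an assumption, not captured from this run:

	// criosed.go — illustrative sketch of the sed-style rewrite above: replace any
	// existing cgroup_manager line with the cgroupfs setting, in memory.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	`
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		fmt.Print(re.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`))
	}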
	I1006 19:51:35.925795  188165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.365101571s)
	I1006 19:51:36.720356  188165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.155937694s)
	I1006 19:51:37.285920  188165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.333434008s)
	I1006 19:51:37.285972  188165 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.383155882s)
	I1006 19:51:37.286015  188165 api_server.go:72] duration metric: took 8.106124772s to wait for apiserver process to appear ...
	I1006 19:51:37.286022  188165 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:51:37.286039  188165 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:51:37.289030  188165 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-100545 addons enable metrics-server
	
	I1006 19:51:37.292034  188165 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1006 19:51:37.294913  188165 addons.go:514] duration metric: took 8.114678455s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1006 19:51:37.296055  188165 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1006 19:51:37.297451  188165 api_server.go:141] control plane version: v1.28.0
	I1006 19:51:37.297474  188165 api_server.go:131] duration metric: took 11.446114ms to wait for apiserver health ...
	I1006 19:51:37.297492  188165 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:51:37.301146  188165 system_pods.go:59] 8 kube-system pods found
	I1006 19:51:37.301184  188165 system_pods.go:61] "coredns-5dd5756b68-pbzhb" [2a53f16d-7f31-4ccf-a6e8-e0b485d40c7a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:51:37.301197  188165 system_pods.go:61] "etcd-old-k8s-version-100545" [4bbefd5f-0657-4d3b-bb1f-4fa90c467916] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:51:37.301203  188165 system_pods.go:61] "kindnet-l292c" [b6fbb8a4-b9fa-43dd-aaa9-9657f706d606] Running
	I1006 19:51:37.301211  188165 system_pods.go:61] "kube-apiserver-old-k8s-version-100545" [a7ef742d-d559-46f7-93da-a4dfb491f13d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:51:37.301223  188165 system_pods.go:61] "kube-controller-manager-old-k8s-version-100545" [ec8c9c46-e338-4387-a86c-51524b77947f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:51:37.301237  188165 system_pods.go:61] "kube-proxy-h4bcn" [1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e] Running
	I1006 19:51:37.301243  188165 system_pods.go:61] "kube-scheduler-old-k8s-version-100545" [78fe852c-4e6a-4964-b900-fdf5da9290b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:51:37.301248  188165 system_pods.go:61] "storage-provisioner" [ce45de57-885f-44c7-8bc3-19d8c43b20b8] Running
	I1006 19:51:37.301254  188165 system_pods.go:74] duration metric: took 3.757043ms to wait for pod list to return data ...
	I1006 19:51:37.301266  188165 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:51:37.303762  188165 default_sa.go:45] found service account: "default"
	I1006 19:51:37.303787  188165 default_sa.go:55] duration metric: took 2.514774ms for default service account to be created ...
	I1006 19:51:37.303798  188165 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 19:51:37.307188  188165 system_pods.go:86] 8 kube-system pods found
	I1006 19:51:37.307219  188165 system_pods.go:89] "coredns-5dd5756b68-pbzhb" [2a53f16d-7f31-4ccf-a6e8-e0b485d40c7a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:51:37.307229  188165 system_pods.go:89] "etcd-old-k8s-version-100545" [4bbefd5f-0657-4d3b-bb1f-4fa90c467916] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:51:37.307236  188165 system_pods.go:89] "kindnet-l292c" [b6fbb8a4-b9fa-43dd-aaa9-9657f706d606] Running
	I1006 19:51:37.307243  188165 system_pods.go:89] "kube-apiserver-old-k8s-version-100545" [a7ef742d-d559-46f7-93da-a4dfb491f13d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:51:37.307249  188165 system_pods.go:89] "kube-controller-manager-old-k8s-version-100545" [ec8c9c46-e338-4387-a86c-51524b77947f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:51:37.307260  188165 system_pods.go:89] "kube-proxy-h4bcn" [1dbb2b48-8c5b-478f-aa23-2ae9bbfef06e] Running
	I1006 19:51:37.307266  188165 system_pods.go:89] "kube-scheduler-old-k8s-version-100545" [78fe852c-4e6a-4964-b900-fdf5da9290b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:51:37.307273  188165 system_pods.go:89] "storage-provisioner" [ce45de57-885f-44c7-8bc3-19d8c43b20b8] Running
	I1006 19:51:37.307280  188165 system_pods.go:126] duration metric: took 3.477292ms to wait for k8s-apps to be running ...
	I1006 19:51:37.307291  188165 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 19:51:37.307347  188165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:51:37.321458  188165 system_svc.go:56] duration metric: took 14.158717ms WaitForService to wait for kubelet
	I1006 19:51:37.321488  188165 kubeadm.go:586] duration metric: took 8.141595284s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:51:37.321508  188165 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:51:37.324824  188165 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:51:37.324862  188165 node_conditions.go:123] node cpu capacity is 2
	I1006 19:51:37.324875  188165 node_conditions.go:105] duration metric: took 3.361279ms to run NodePressure ...
	I1006 19:51:37.324888  188165 start.go:241] waiting for startup goroutines ...
	I1006 19:51:37.324895  188165 start.go:246] waiting for cluster config update ...
	I1006 19:51:37.324911  188165 start.go:255] writing updated cluster config ...
	I1006 19:51:37.325218  188165 ssh_runner.go:195] Run: rm -f paused
	I1006 19:51:37.328803  188165 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:51:37.333457  188165 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-pbzhb" in "kube-system" namespace to be "Ready" or be gone ...
	W1006 19:51:39.339445  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	I1006 19:51:35.968487  188975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:51:36.436858  188975 ssh_runner.go:195] Run: sudo systemctl restart crio
	W1006 19:51:41.839179  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:43.840191  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:46.339407  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:48.339937  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:50.839410  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:52.840370  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:55.339137  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:57.339634  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:51:59.840290  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:52:02.340120  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	W1006 19:52:04.840372  188165 pod_ready.go:104] pod "coredns-5dd5756b68-pbzhb" is not "Ready", error: <nil>
	I1006 19:52:05.839561  188165 pod_ready.go:94] pod "coredns-5dd5756b68-pbzhb" is "Ready"
	I1006 19:52:05.839594  188165 pod_ready.go:86] duration metric: took 28.506110175s for pod "coredns-5dd5756b68-pbzhb" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:05.843094  188165 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:05.848029  188165 pod_ready.go:94] pod "etcd-old-k8s-version-100545" is "Ready"
	I1006 19:52:05.848059  188165 pod_ready.go:86] duration metric: took 4.938109ms for pod "etcd-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:05.851198  188165 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:05.856373  188165 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-100545" is "Ready"
	I1006 19:52:05.856402  188165 pod_ready.go:86] duration metric: took 5.178138ms for pod "kube-apiserver-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:05.859529  188165 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:06.037460  188165 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-100545" is "Ready"
	I1006 19:52:06.037489  188165 pod_ready.go:86] duration metric: took 177.935402ms for pod "kube-controller-manager-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:06.238541  188165 pod_ready.go:83] waiting for pod "kube-proxy-h4bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:06.638200  188165 pod_ready.go:94] pod "kube-proxy-h4bcn" is "Ready"
	I1006 19:52:06.638229  188165 pod_ready.go:86] duration metric: took 399.65948ms for pod "kube-proxy-h4bcn" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:06.838232  188165 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:07.237125  188165 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-100545" is "Ready"
	I1006 19:52:07.237157  188165 pod_ready.go:86] duration metric: took 398.89741ms for pod "kube-scheduler-old-k8s-version-100545" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:52:07.237169  188165 pod_ready.go:40] duration metric: took 29.908335656s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
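	The pod_ready loop above polls each kube-system pod until its Ready condition reports true (coredns took roughly 28.5s here). A minimal client-go sketch of that style of wait, offered only as an illustration (the pod name and 4-minute deadline echo the log; this is not minikube's helper):

	// podready.go — illustrative sketch: poll a pod until it reports the Ready
	// condition or a deadline passes. Assumes k8s.io/client-go is available.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s extra wait above
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-5dd5756b68-pbzhb", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // the log above polls on a similar cadence
		}
		log.Fatal("timed out waiting for pod to become Ready")
	}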
	I1006 19:52:07.290601  188165 start.go:623] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1006 19:52:07.293658  188165 out.go:203] 
	W1006 19:52:07.296549  188165 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1006 19:52:07.299426  188165 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1006 19:52:07.302296  188165 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-100545" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.370780912Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1338ace9-a57c-4dc0-972a-97819012bef5 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.371685166Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e3fd6a62-363f-41f1-8a87-4a5cd3250a92 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.37277334Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq/dashboard-metrics-scraper" id=5a332012-e3a3-422d-873b-1cbedb7ceb7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.373012441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.380642059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.381345439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.397811054Z" level=info msg="Created container f8406dc707a80365322f2b9f5d84aec9974cb6a529db4ce3e4077d25f365ebc2: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq/dashboard-metrics-scraper" id=5a332012-e3a3-422d-873b-1cbedb7ceb7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.404012832Z" level=info msg="Starting container: f8406dc707a80365322f2b9f5d84aec9974cb6a529db4ce3e4077d25f365ebc2" id=ed81f972-6096-4731-94a8-e9ef0c75ffd2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.408219755Z" level=info msg="Started container" PID=1654 containerID=f8406dc707a80365322f2b9f5d84aec9974cb6a529db4ce3e4077d25f365ebc2 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq/dashboard-metrics-scraper id=ed81f972-6096-4731-94a8-e9ef0c75ffd2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f76580e7f1233f860e0041de83572c4d021fe969e5975e1b9170fb4eed66b87
	Oct 06 19:52:13 old-k8s-version-100545 conmon[1652]: conmon f8406dc707a80365322f <ninfo>: container 1654 exited with status 1
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.595750257Z" level=info msg="Removing container: 4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c" id=21bdfb51-3c6e-4937-895d-298cc29a17b3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.606088335Z" level=info msg="Error loading conmon cgroup of container 4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c: cgroup deleted" id=21bdfb51-3c6e-4937-895d-298cc29a17b3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 19:52:13 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:13.611032345Z" level=info msg="Removed container 4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq/dashboard-metrics-scraper" id=21bdfb51-3c6e-4937-895d-298cc29a17b3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.485236718Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.491424112Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.491460905Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.491485816Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.494690736Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.494731688Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.494755122Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.497992395Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.498027481Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.498049381Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.501454132Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:52:15 old-k8s-version-100545 crio[648]: time="2025-10-06T19:52:15.5014914Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	f8406dc707a80       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   2                   8f76580e7f123       dashboard-metrics-scraper-5f989dc9cf-k25bq       kubernetes-dashboard
	fe27300a0ccb2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   bf16ea6d41772       storage-provisioner                              kube-system
	ee2e449a1dbc6       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago      Running             kubernetes-dashboard        0                   1b79f3aa813cc       kubernetes-dashboard-8694d4445c-c7sw4            kubernetes-dashboard
	a580cd1fc5132       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           49 seconds ago      Running             coredns                     1                   44f1e424bf39f       coredns-5dd5756b68-pbzhb                         kube-system
	09e6a8fae84de       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   c53fc370cee8d       busybox                                          default
	d6f485a8ab137       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   b7ed1e97a198e       kindnet-l292c                                    kube-system
	bc0287aa0a83e       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           49 seconds ago      Running             kube-proxy                  1                   d289175a1774b       kube-proxy-h4bcn                                 kube-system
	0b711f9a94598       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   bf16ea6d41772       storage-provisioner                              kube-system
	8c6661172ed70       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           55 seconds ago      Running             kube-apiserver              1                   2e869829dbb99       kube-apiserver-old-k8s-version-100545            kube-system
	ee419682aebe0       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           55 seconds ago      Running             kube-scheduler              1                   7778ba51acb24       kube-scheduler-old-k8s-version-100545            kube-system
	a18305a0a7618       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           55 seconds ago      Running             etcd                        1                   6929c7daae5ef       etcd-old-k8s-version-100545                      kube-system
	e5a93ede6956e       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           55 seconds ago      Running             kube-controller-manager     1                   28f4ad530ff66       kube-controller-manager-old-k8s-version-100545   kube-system
	
	
	==> coredns [a580cd1fc5132fd0541524579b8c5f441ed69951103a6c3be50cfa8f18514d5b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50085 - 50462 "HINFO IN 5305303664987629050.2966249946981424498. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023412845s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-100545
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-100545
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=old-k8s-version-100545
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_50_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:50:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-100545
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:52:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:52:04 +0000   Mon, 06 Oct 2025 19:50:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:52:04 +0000   Mon, 06 Oct 2025 19:50:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:52:04 +0000   Mon, 06 Oct 2025 19:50:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:52:04 +0000   Mon, 06 Oct 2025 19:50:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-100545
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c19813e2e1342a290c4893cbe069a28
	  System UUID:                b1b34591-7b1c-445a-99e0-f9c92bb1885f
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-pbzhb                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-old-k8s-version-100545                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-l292c                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-100545             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-old-k8s-version-100545    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-h4bcn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-100545             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-k25bq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-c7sw4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-100545 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node old-k8s-version-100545 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s                 node-controller  Node old-k8s-version-100545 event: Registered Node old-k8s-version-100545 in Controller
	  Normal  NodeReady                90s                  kubelet          Node old-k8s-version-100545 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node old-k8s-version-100545 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node old-k8s-version-100545 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                  node-controller  Node old-k8s-version-100545 event: Registered Node old-k8s-version-100545 in Controller
	
	
	==> dmesg <==
	[Oct 6 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:20] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:21] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:23] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a18305a0a761844f2ffb53c14f0baa5064d0e47665887eddacbafe1fa1fc5fc1] <==
	{"level":"info","ts":"2025-10-06T19:51:29.333287Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-06T19:51:29.333309Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-06T19:51:29.341286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-06T19:51:29.34138Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-06T19:51:29.341463Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-06T19:51:29.341501Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-06T19:51:29.447799Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-06T19:51:29.447994Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-06T19:51:29.450749Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-06T19:51:29.451436Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-06T19:51:29.451511Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-06T19:51:31.13974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-06T19:51:31.139868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-06T19:51:31.139918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-06T19:51:31.139954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-06T19:51:31.139995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-06T19:51:31.140034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-06T19:51:31.140063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-06T19:51:31.145085Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-100545 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-06T19:51:31.145203Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-06T19:51:31.147449Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-06T19:51:31.147762Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-06T19:51:31.148814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-06T19:51:31.155155Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-06T19:51:31.155248Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:52:24 up  1:34,  0 user,  load average: 1.98, 1.35, 1.54
	Linux old-k8s-version-100545 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d6f485a8ab13784b5bea939d07c6e334ff850a29eac20039c5ca941621b9c376] <==
	I1006 19:51:35.247116       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:51:35.247486       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1006 19:51:35.247650       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:51:35.247661       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:51:35.247674       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:51:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:51:35.491924       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:51:35.491970       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:51:35.491984       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:51:35.492425       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1006 19:52:05.492850       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1006 19:52:05.492982       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1006 19:52:05.493037       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1006 19:52:05.549470       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1006 19:52:06.992973       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:52:06.993085       1 metrics.go:72] Registering metrics
	I1006 19:52:06.993152       1 controller.go:711] "Syncing nftables rules"
	I1006 19:52:15.484263       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:52:15.484302       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8c6661172ed70b90aa4e68f8c6df968c5115fa2d90d59204af5eb1097fa7726a] <==
	I1006 19:51:33.638137       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1006 19:51:33.948190       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1006 19:51:33.952806       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:51:33.964233       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1006 19:51:33.964330       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1006 19:51:33.964527       1 shared_informer.go:318] Caches are synced for configmaps
	I1006 19:51:33.965929       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 19:51:33.978474       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1006 19:51:33.980492       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1006 19:51:33.983014       1 aggregator.go:166] initial CRD sync complete...
	I1006 19:51:33.983114       1 autoregister_controller.go:141] Starting autoregister controller
	I1006 19:51:33.983144       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 19:51:33.983173       1 cache.go:39] Caches are synced for autoregister controller
	I1006 19:51:34.001620       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1006 19:51:34.645994       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:51:37.103375       1 controller.go:624] quota admission added evaluator for: namespaces
	I1006 19:51:37.147030       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1006 19:51:37.185084       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:51:37.197476       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:51:37.208471       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1006 19:51:37.260253       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.23.159"}
	I1006 19:51:37.278347       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.173.186"}
	I1006 19:51:47.202353       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 19:51:47.208828       1 controller.go:624] quota admission added evaluator for: endpoints
	I1006 19:51:47.214749       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e5a93ede6956ec8f39200e0ca417de2ddcc0803039e26b14fc6c083f2a84d73c] <==
	I1006 19:51:47.272777       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-c7sw4"
	I1006 19:51:47.283840       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-k25bq"
	I1006 19:51:47.292158       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.498993ms"
	I1006 19:51:47.301373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.174264ms"
	I1006 19:51:47.307327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.099583ms"
	I1006 19:51:47.307568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="60.432µs"
	I1006 19:51:47.312863       1 shared_informer.go:318] Caches are synced for resource quota
	I1006 19:51:47.316285       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="14.838016ms"
	I1006 19:51:47.326439       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="90.003µs"
	I1006 19:51:47.343358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="26.934734ms"
	I1006 19:51:47.343487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.279µs"
	I1006 19:51:47.344848       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.851µs"
	I1006 19:51:47.381091       1 shared_informer.go:318] Caches are synced for attach detach
	I1006 19:51:47.754332       1 shared_informer.go:318] Caches are synced for garbage collector
	I1006 19:51:47.767862       1 shared_informer.go:318] Caches are synced for garbage collector
	I1006 19:51:47.767908       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1006 19:51:52.554792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.607072ms"
	I1006 19:51:52.554909       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.508µs"
	I1006 19:51:56.553551       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="148.368µs"
	I1006 19:51:57.557465       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.643µs"
	I1006 19:51:58.555660       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.815µs"
	I1006 19:52:05.445970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.085135ms"
	I1006 19:52:05.446052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.091µs"
	I1006 19:52:13.617839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.575µs"
	I1006 19:52:17.618729       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.538µs"
	
	
	==> kube-proxy [bc0287aa0a83ee0795d413e51de320c7bf5087a5050b203fd68bfaaa5765f74e] <==
	I1006 19:51:36.271636       1 server_others.go:69] "Using iptables proxy"
	I1006 19:51:36.318897       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1006 19:51:36.456529       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:51:36.476057       1 server_others.go:152] "Using iptables Proxier"
	I1006 19:51:36.476101       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1006 19:51:36.476114       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1006 19:51:36.476173       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1006 19:51:36.476465       1 server.go:846] "Version info" version="v1.28.0"
	I1006 19:51:36.476482       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:51:36.478215       1 config.go:188] "Starting service config controller"
	I1006 19:51:36.478250       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1006 19:51:36.478276       1 config.go:97] "Starting endpoint slice config controller"
	I1006 19:51:36.478281       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1006 19:51:36.491956       1 config.go:315] "Starting node config controller"
	I1006 19:51:36.491979       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1006 19:51:36.580230       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1006 19:51:36.580291       1 shared_informer.go:318] Caches are synced for service config
	I1006 19:51:36.600585       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ee419682aebe0e3fd692c174ff777858ffe04bf8c1a05f5717da689629decf7a] <==
	I1006 19:51:30.833262       1 serving.go:348] Generated self-signed cert in-memory
	W1006 19:51:33.708070       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1006 19:51:33.708113       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1006 19:51:33.708124       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1006 19:51:33.708132       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1006 19:51:33.851485       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1006 19:51:33.851596       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:51:33.853523       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1006 19:51:33.853741       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:51:33.853789       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1006 19:51:33.860359       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1006 19:51:33.942706       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1006 19:51:33.942805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1006 19:51:33.942903       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1006 19:51:33.942943       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1006 19:51:33.943035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1006 19:51:33.943070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1006 19:51:33.943627       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1006 19:51:33.943688       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1006 19:51:33.967151       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1006 19:51:33.967643       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1006 19:51:35.268032       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 06 19:51:47 old-k8s-version-100545 kubelet[775]: I1006 19:51:47.340599     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ba0aac8a-6f26-492c-859a-fbc86c90eb65-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-k25bq\" (UID: \"ba0aac8a-6f26-492c-859a-fbc86c90eb65\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq"
	Oct 06 19:51:47 old-k8s-version-100545 kubelet[775]: I1006 19:51:47.340719     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/953e3fd1-a661-4e2a-9079-eeeb2e0e3746-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-c7sw4\" (UID: \"953e3fd1-a661-4e2a-9079-eeeb2e0e3746\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c7sw4"
	Oct 06 19:51:47 old-k8s-version-100545 kubelet[775]: I1006 19:51:47.340827     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8xp\" (UniqueName: \"kubernetes.io/projected/953e3fd1-a661-4e2a-9079-eeeb2e0e3746-kube-api-access-td8xp\") pod \"kubernetes-dashboard-8694d4445c-c7sw4\" (UID: \"953e3fd1-a661-4e2a-9079-eeeb2e0e3746\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c7sw4"
	Oct 06 19:51:47 old-k8s-version-100545 kubelet[775]: W1006 19:51:47.640086     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/crio-8f76580e7f1233f860e0041de83572c4d021fe969e5975e1b9170fb4eed66b87 WatchSource:0}: Error finding container 8f76580e7f1233f860e0041de83572c4d021fe969e5975e1b9170fb4eed66b87: Status 404 returned error can't find the container with id 8f76580e7f1233f860e0041de83572c4d021fe969e5975e1b9170fb4eed66b87
	Oct 06 19:51:47 old-k8s-version-100545 kubelet[775]: W1006 19:51:47.640436     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/44567b8f0b33f8cfbca0e390407ff65fe565a80973c93328cbb178c5ca076b3b/crio-1b79f3aa813cc9c0e40607bce91a8cbef2deebe431b888c64281b2bef64d0cbb WatchSource:0}: Error finding container 1b79f3aa813cc9c0e40607bce91a8cbef2deebe431b888c64281b2bef64d0cbb: Status 404 returned error can't find the container with id 1b79f3aa813cc9c0e40607bce91a8cbef2deebe431b888c64281b2bef64d0cbb
	Oct 06 19:51:56 old-k8s-version-100545 kubelet[775]: I1006 19:51:56.532626     775 scope.go:117] "RemoveContainer" containerID="b2dff1f776cbda15b9146d9dc0b5c45b9e942d4ae7d8fee34936aa9fd9d49383"
	Oct 06 19:51:56 old-k8s-version-100545 kubelet[775]: I1006 19:51:56.554507     775 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c7sw4" podStartSLOduration=5.131379207 podCreationTimestamp="2025-10-06 19:51:47 +0000 UTC" firstStartedPulling="2025-10-06 19:51:47.64680959 +0000 UTC m=+19.572554147" lastFinishedPulling="2025-10-06 19:51:52.069854077 +0000 UTC m=+23.995598634" observedRunningTime="2025-10-06 19:51:52.540663209 +0000 UTC m=+24.466407774" watchObservedRunningTime="2025-10-06 19:51:56.554423694 +0000 UTC m=+28.480168259"
	Oct 06 19:51:57 old-k8s-version-100545 kubelet[775]: I1006 19:51:57.536663     775 scope.go:117] "RemoveContainer" containerID="b2dff1f776cbda15b9146d9dc0b5c45b9e942d4ae7d8fee34936aa9fd9d49383"
	Oct 06 19:51:57 old-k8s-version-100545 kubelet[775]: I1006 19:51:57.536994     775 scope.go:117] "RemoveContainer" containerID="4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c"
	Oct 06 19:51:57 old-k8s-version-100545 kubelet[775]: E1006 19:51:57.537268     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-k25bq_kubernetes-dashboard(ba0aac8a-6f26-492c-859a-fbc86c90eb65)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq" podUID="ba0aac8a-6f26-492c-859a-fbc86c90eb65"
	Oct 06 19:51:58 old-k8s-version-100545 kubelet[775]: I1006 19:51:58.540857     775 scope.go:117] "RemoveContainer" containerID="4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c"
	Oct 06 19:51:58 old-k8s-version-100545 kubelet[775]: E1006 19:51:58.541141     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-k25bq_kubernetes-dashboard(ba0aac8a-6f26-492c-859a-fbc86c90eb65)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq" podUID="ba0aac8a-6f26-492c-859a-fbc86c90eb65"
	Oct 06 19:51:59 old-k8s-version-100545 kubelet[775]: I1006 19:51:59.543310     775 scope.go:117] "RemoveContainer" containerID="4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c"
	Oct 06 19:51:59 old-k8s-version-100545 kubelet[775]: E1006 19:51:59.543601     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-k25bq_kubernetes-dashboard(ba0aac8a-6f26-492c-859a-fbc86c90eb65)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq" podUID="ba0aac8a-6f26-492c-859a-fbc86c90eb65"
	Oct 06 19:52:05 old-k8s-version-100545 kubelet[775]: I1006 19:52:05.571685     775 scope.go:117] "RemoveContainer" containerID="0b711f9a9459848bc688692b203cbbb20fe8f51db6e39706fde7b2385746a690"
	Oct 06 19:52:13 old-k8s-version-100545 kubelet[775]: I1006 19:52:13.370061     775 scope.go:117] "RemoveContainer" containerID="4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c"
	Oct 06 19:52:13 old-k8s-version-100545 kubelet[775]: I1006 19:52:13.593485     775 scope.go:117] "RemoveContainer" containerID="4835534f30a015cf0222ec9f33e3a80122b221da1e2c28ad5173ebefb907337c"
	Oct 06 19:52:13 old-k8s-version-100545 kubelet[775]: I1006 19:52:13.593931     775 scope.go:117] "RemoveContainer" containerID="f8406dc707a80365322f2b9f5d84aec9974cb6a529db4ce3e4077d25f365ebc2"
	Oct 06 19:52:13 old-k8s-version-100545 kubelet[775]: E1006 19:52:13.594291     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-k25bq_kubernetes-dashboard(ba0aac8a-6f26-492c-859a-fbc86c90eb65)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq" podUID="ba0aac8a-6f26-492c-859a-fbc86c90eb65"
	Oct 06 19:52:17 old-k8s-version-100545 kubelet[775]: I1006 19:52:17.603675     775 scope.go:117] "RemoveContainer" containerID="f8406dc707a80365322f2b9f5d84aec9974cb6a529db4ce3e4077d25f365ebc2"
	Oct 06 19:52:17 old-k8s-version-100545 kubelet[775]: E1006 19:52:17.604002     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-k25bq_kubernetes-dashboard(ba0aac8a-6f26-492c-859a-fbc86c90eb65)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-k25bq" podUID="ba0aac8a-6f26-492c-859a-fbc86c90eb65"
	Oct 06 19:52:19 old-k8s-version-100545 kubelet[775]: I1006 19:52:19.467789     775 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 06 19:52:19 old-k8s-version-100545 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 06 19:52:19 old-k8s-version-100545 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 06 19:52:19 old-k8s-version-100545 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ee2e449a1dbc6e891f301528edaf5f304cc37cb287756a88bd15304dfbe66e59] <==
	2025/10/06 19:51:52 Starting overwatch
	2025/10/06 19:51:52 Using namespace: kubernetes-dashboard
	2025/10/06 19:51:52 Using in-cluster config to connect to apiserver
	2025/10/06 19:51:52 Using secret token for csrf signing
	2025/10/06 19:51:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/06 19:51:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/06 19:51:52 Successful initial request to the apiserver, version: v1.28.0
	2025/10/06 19:51:52 Generating JWE encryption key
	2025/10/06 19:51:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/06 19:51:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/06 19:51:52 Initializing JWE encryption key from synchronized object
	2025/10/06 19:51:52 Creating in-cluster Sidecar client
	2025/10/06 19:51:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/06 19:51:52 Serving insecurely on HTTP port: 9090
	2025/10/06 19:52:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0b711f9a9459848bc688692b203cbbb20fe8f51db6e39706fde7b2385746a690] <==
	I1006 19:51:35.526165       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1006 19:52:05.549128       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fe27300a0ccb2d60aa2a92ac42db7ebc34ff4c80d2bd745b7157749d807ccd59] <==
	I1006 19:52:05.634594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 19:52:05.648006       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 19:52:05.648679       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1006 19:52:23.061506       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 19:52:23.061762       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-100545_e72cbd91-d0ba-4937-b295-395cc8b7ab59!
	I1006 19:52:23.062047       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"411b65f9-f70e-49ad-ba6c-89d8933e38af", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-100545_e72cbd91-d0ba-4937-b295-395cc8b7ab59 became leader
	I1006 19:52:23.164757       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-100545_e72cbd91-d0ba-4937-b295-395cc8b7ab59!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-100545 -n old-k8s-version-100545
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-100545 -n old-k8s-version-100545: exit status 2 (406.817129ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-100545 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.78s)
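	The post-mortem above still reports the API server as "Running" after the pause attempt. To retry the pause step by hand (a minimal sketch, assuming the old-k8s-version-100545 profile is still up):
	
		out/minikube-linux-arm64 pause -p old-k8s-version-100545
		out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-100545
		out/minikube-linux-arm64 unpause -p old-k8s-version-100545
	
	If the pause again exits non-zero, compare its stderr with the runc error shown under the next test below.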

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-314275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-314275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (350.380115ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:53:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-314275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
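	The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused-state check, which runs "sudo runc list -f json" on the node and fails because /run/runc cannot be opened. A rough manual reproduction of that probe (a sketch, assuming the no-preload-314275 node container is still running):
	
		out/minikube-linux-arm64 ssh -p no-preload-314275 -- sudo runc list -f json
		out/minikube-linux-arm64 ssh -p no-preload-314275 -- ls -la /run/runc
	
	The second command only confirms whether the state directory named in the error exists on the node.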
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-314275 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-314275 describe deploy/metrics-server -n kube-system: exit status 1 (131.374222ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-314275 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
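	The assertion expects the metrics-server Deployment's pod template to reference the substituted image "fake.domain/registry.k8s.io/echoserver:1.4"; here the Deployment was never created, so the deployment info is empty. One way to inspect the image by hand once the addon has applied (a sketch, assuming the deployment exists):
	
		kubectl --context no-preload-314275 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'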
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-314275
helpers_test.go:243: (dbg) docker inspect no-preload-314275:

-- stdout --
	[
	    {
	        "Id": "3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab",
	        "Created": "2025-10-06T19:52:30.053793791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 192627,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:52:30.189762916Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/hostname",
	        "HostsPath": "/var/lib/docker/containers/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/hosts",
	        "LogPath": "/var/lib/docker/containers/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab-json.log",
	        "Name": "/no-preload-314275",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-314275:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-314275",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab",
	                "LowerDir": "/var/lib/docker/overlay2/6baf105e7753dc1d2374b161686b0bc798240cf018581905f7dd4f21447d1a20-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6baf105e7753dc1d2374b161686b0bc798240cf018581905f7dd4f21447d1a20/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6baf105e7753dc1d2374b161686b0bc798240cf018581905f7dd4f21447d1a20/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6baf105e7753dc1d2374b161686b0bc798240cf018581905f7dd4f21447d1a20/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-314275",
	                "Source": "/var/lib/docker/volumes/no-preload-314275/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-314275",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-314275",
	                "name.minikube.sigs.k8s.io": "no-preload-314275",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2b04be342cca123a256cd1c2983440304995ec4155b67d1e03725d9394358108",
	            "SandboxKey": "/var/run/docker/netns/2b04be342cca",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-314275": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:48:39:fa:25:e8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b693310dd981b3558dbfee81926e93addf9d9e76e4588123249599a4c1c5d16e",
	                    "EndpointID": "df0658e542f7d007429975d4e56ed54f3d4c881974aa7623c239ad06971fc95f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-314275",
	                        "3b7c30b4fccf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
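The inspect output above shows the kic container publishing its service ports (22, 2376, 5000, 8443 and 32443/tcp) on ephemeral 127.0.0.1 host ports, with SSH landing on 33060. When reproducing such a failure locally, the mapped SSH port can be read back the same way the harness does further down in these logs (cli_runner.go invoking docker container inspect with a Go template). A minimal sketch, assuming a local docker CLI and a running kic container; the standalone program and its default container name are illustrative, not part of the test harness:

// hostport.go: print the 127.0.0.1 host port Docker mapped to a kic
// container's 22/tcp, using the same Go template the minikube logs below
// pass to "docker container inspect -f".
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	name := "no-preload-314275" // illustrative default; pass another profile name as argv[1]
	if len(os.Args) > 1 {
		name = os.Args[1]
	}
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "inspect %s: %v\n", name, err)
		os.Exit(1)
	}
	// Combined with the profile's id_rsa (see the sshutil.go lines later in
	// the log), this port gives shell access as user "docker" on 127.0.0.1.
	fmt.Println(strings.TrimSpace(string(out)))
}

For the dump above this prints 33060, matching the Port the sshutil entries use when the storage-provisioner and storageclass manifests are copied onto the node.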
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-314275 -n no-preload-314275
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-314275 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-314275 logs -n 25: (1.454903834s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-053944 sudo crio config                                                                                                                                                                                                             │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ delete  │ -p cilium-053944                                                                                                                                                                                                                              │ cilium-053944             │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │ 06 Oct 25 19:40 UTC │
	│ start   │ -p force-systemd-env-760371 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-760371  │ jenkins │ v1.37.0 │ 06 Oct 25 19:40 UTC │                     │
	│ ssh     │ force-systemd-flag-203169 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-203169 │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:47 UTC │
	│ delete  │ -p force-systemd-flag-203169                                                                                                                                                                                                                  │ force-systemd-flag-203169 │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:47 UTC │
	│ start   │ -p cert-expiration-585086 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-585086    │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:48 UTC │
	│ delete  │ -p force-systemd-env-760371                                                                                                                                                                                                                   │ force-systemd-env-760371  │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ start   │ -p cert-options-593131 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ ssh     │ cert-options-593131 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ ssh     │ -p cert-options-593131 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ delete  │ -p cert-options-593131                                                                                                                                                                                                                        │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-100545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │                     │
	│ stop    │ -p old-k8s-version-100545 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-100545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:51 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p cert-expiration-585086 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-585086    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:53 UTC │
	│ image   │ old-k8s-version-100545 image list --format=json                                                                                                                                                                                               │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ pause   │ -p old-k8s-version-100545 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │                     │
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275         │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:53 UTC │
	│ delete  │ -p cert-expiration-585086                                                                                                                                                                                                                     │ cert-expiration-585086    │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:53 UTC │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393        │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-314275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-314275         │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:53:24
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:53:24.650090  195795 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:53:24.650211  195795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:53:24.650216  195795 out.go:374] Setting ErrFile to fd 2...
	I1006 19:53:24.650220  195795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:53:24.650483  195795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:53:24.651003  195795 out.go:368] Setting JSON to false
	I1006 19:53:24.651988  195795 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5740,"bootTime":1759774665,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:53:24.652062  195795 start.go:140] virtualization:  
	I1006 19:53:24.656224  195795 out.go:179] * [embed-certs-830393] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:53:24.659588  195795 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:53:24.659633  195795 notify.go:220] Checking for updates...
	I1006 19:53:24.666213  195795 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:53:24.669370  195795 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:53:24.672468  195795 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:53:24.675855  195795 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:53:24.678871  195795 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:53:24.682582  195795 config.go:182] Loaded profile config "no-preload-314275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:53:24.682729  195795 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:53:24.709226  195795 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:53:24.709391  195795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:53:24.771283  195795 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:53:24.761909325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:53:24.771397  195795 docker.go:318] overlay module found
	I1006 19:53:24.774799  195795 out.go:179] * Using the docker driver based on user configuration
	I1006 19:53:24.779386  195795 start.go:304] selected driver: docker
	I1006 19:53:24.779420  195795 start.go:924] validating driver "docker" against <nil>
	I1006 19:53:24.779434  195795 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:53:24.780164  195795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:53:24.844978  195795 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:53:24.836036853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:53:24.845130  195795 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 19:53:24.845363  195795 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:53:24.848411  195795 out.go:179] * Using Docker driver with root privileges
	I1006 19:53:24.851146  195795 cni.go:84] Creating CNI manager for ""
	I1006 19:53:24.851213  195795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:53:24.851230  195795 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 19:53:24.851308  195795 start.go:348] cluster config:
	{Name:embed-certs-830393 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-830393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:53:24.854487  195795 out.go:179] * Starting "embed-certs-830393" primary control-plane node in "embed-certs-830393" cluster
	I1006 19:53:24.857373  195795 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:53:24.860264  195795 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:53:24.863077  195795 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:53:24.863140  195795 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:53:24.863155  195795 cache.go:58] Caching tarball of preloaded images
	I1006 19:53:24.863251  195795 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:53:24.863265  195795 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:53:24.863374  195795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/config.json ...
	I1006 19:53:24.863399  195795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/config.json: {Name:mk7b8ff5ec1ca113ba31e33ea0c394571b430ad1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:53:24.863581  195795 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:53:24.891389  195795 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:53:24.891411  195795 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:53:24.891424  195795 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:53:24.891447  195795 start.go:360] acquireMachinesLock for embed-certs-830393: {Name:mk9482698940ed15367c12951e7ada37afdeab68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:53:24.891555  195795 start.go:364] duration metric: took 86.779µs to acquireMachinesLock for "embed-certs-830393"
	I1006 19:53:24.891585  195795 start.go:93] Provisioning new machine with config: &{Name:embed-certs-830393 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-830393 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:53:24.891667  195795 start.go:125] createHost starting for "" (driver="docker")
	I1006 19:53:23.954913  192329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:53:24.454158  192329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:53:24.954225  192329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:53:25.454074  192329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:53:25.954560  192329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:53:26.454148  192329 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:53:26.690854  192329 kubeadm.go:1113] duration metric: took 3.858699091s to wait for elevateKubeSystemPrivileges
	I1006 19:53:26.690880  192329 kubeadm.go:402] duration metric: took 30.044893495s to StartCluster
	I1006 19:53:26.690896  192329 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:53:26.690955  192329 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:53:26.691606  192329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:53:26.691829  192329 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:53:26.691986  192329 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 19:53:26.692251  192329 config.go:182] Loaded profile config "no-preload-314275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:53:26.692290  192329 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:53:26.692359  192329 addons.go:69] Setting storage-provisioner=true in profile "no-preload-314275"
	I1006 19:53:26.692373  192329 addons.go:238] Setting addon storage-provisioner=true in "no-preload-314275"
	I1006 19:53:26.692393  192329 host.go:66] Checking if "no-preload-314275" exists ...
	I1006 19:53:26.692886  192329 cli_runner.go:164] Run: docker container inspect no-preload-314275 --format={{.State.Status}}
	I1006 19:53:26.693410  192329 addons.go:69] Setting default-storageclass=true in profile "no-preload-314275"
	I1006 19:53:26.693430  192329 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-314275"
	I1006 19:53:26.693701  192329 cli_runner.go:164] Run: docker container inspect no-preload-314275 --format={{.State.Status}}
	I1006 19:53:26.697640  192329 out.go:179] * Verifying Kubernetes components...
	I1006 19:53:26.703480  192329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:53:26.743262  192329 addons.go:238] Setting addon default-storageclass=true in "no-preload-314275"
	I1006 19:53:26.743306  192329 host.go:66] Checking if "no-preload-314275" exists ...
	I1006 19:53:26.743969  192329 cli_runner.go:164] Run: docker container inspect no-preload-314275 --format={{.State.Status}}
	I1006 19:53:26.744764  192329 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 19:53:26.747897  192329 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:53:26.747918  192329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 19:53:26.747985  192329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:53:26.773533  192329 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 19:53:26.773554  192329 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 19:53:26.773652  192329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:53:26.789163  192329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/no-preload-314275/id_rsa Username:docker}
	I1006 19:53:26.812381  192329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/no-preload-314275/id_rsa Username:docker}
	I1006 19:53:27.154054  192329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 19:53:27.155925  192329 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1006 19:53:27.156085  192329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:53:27.213266  192329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:53:28.093880  192329 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1006 19:53:28.096658  192329 node_ready.go:35] waiting up to 6m0s for node "no-preload-314275" to be "Ready" ...
	I1006 19:53:28.605906  192329 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-314275" context rescaled to 1 replicas
	I1006 19:53:28.630069  192329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.416717162s)
	I1006 19:53:28.634235  192329 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1006 19:53:28.637597  192329 addons.go:514] duration metric: took 1.945277586s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1006 19:53:24.896976  195795 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 19:53:24.897256  195795 start.go:159] libmachine.API.Create for "embed-certs-830393" (driver="docker")
	I1006 19:53:24.897302  195795 client.go:168] LocalClient.Create starting
	I1006 19:53:24.897381  195795 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem
	I1006 19:53:24.897450  195795 main.go:141] libmachine: Decoding PEM data...
	I1006 19:53:24.897470  195795 main.go:141] libmachine: Parsing certificate...
	I1006 19:53:24.897535  195795 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem
	I1006 19:53:24.897561  195795 main.go:141] libmachine: Decoding PEM data...
	I1006 19:53:24.897577  195795 main.go:141] libmachine: Parsing certificate...
	I1006 19:53:24.897927  195795 cli_runner.go:164] Run: docker network inspect embed-certs-830393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 19:53:24.913551  195795 cli_runner.go:211] docker network inspect embed-certs-830393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 19:53:24.913657  195795 network_create.go:284] running [docker network inspect embed-certs-830393] to gather additional debugging logs...
	I1006 19:53:24.913742  195795 cli_runner.go:164] Run: docker network inspect embed-certs-830393
	W1006 19:53:24.934885  195795 cli_runner.go:211] docker network inspect embed-certs-830393 returned with exit code 1
	I1006 19:53:24.934914  195795 network_create.go:287] error running [docker network inspect embed-certs-830393]: docker network inspect embed-certs-830393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-830393 not found
	I1006 19:53:24.934929  195795 network_create.go:289] output of [docker network inspect embed-certs-830393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-830393 not found
	
	** /stderr **
	I1006 19:53:24.935031  195795 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:53:24.951584  195795 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7058eae896da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:57:c0:bd:de:1b} reservation:<nil>}
	I1006 19:53:24.951993  195795 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e321ce8cf6dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:e6:76:87:fe:bb} reservation:<nil>}
	I1006 19:53:24.952334  195795 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-17bdbd7bd7be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:93:f3:19:55:10} reservation:<nil>}
	I1006 19:53:24.952558  195795 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b693310dd981 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:52:4e:f5:c1:e3:5f} reservation:<nil>}
	I1006 19:53:24.952969  195795 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b3f10}
	I1006 19:53:24.952992  195795 network_create.go:124] attempt to create docker network embed-certs-830393 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1006 19:53:24.953054  195795 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-830393 embed-certs-830393
	I1006 19:53:25.031843  195795 network_create.go:108] docker network embed-certs-830393 192.168.85.0/24 created
	I1006 19:53:25.031897  195795 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-830393" container
	I1006 19:53:25.031977  195795 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 19:53:25.054158  195795 cli_runner.go:164] Run: docker volume create embed-certs-830393 --label name.minikube.sigs.k8s.io=embed-certs-830393 --label created_by.minikube.sigs.k8s.io=true
	I1006 19:53:25.072724  195795 oci.go:103] Successfully created a docker volume embed-certs-830393
	I1006 19:53:25.072808  195795 cli_runner.go:164] Run: docker run --rm --name embed-certs-830393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-830393 --entrypoint /usr/bin/test -v embed-certs-830393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 19:53:25.714087  195795 oci.go:107] Successfully prepared a docker volume embed-certs-830393
	I1006 19:53:25.714130  195795 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:53:25.714149  195795 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 19:53:25.714210  195795 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-830393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	W1006 19:53:30.101237  192329 node_ready.go:57] node "no-preload-314275" has "Ready":"False" status (will retry)
	W1006 19:53:32.104027  192329 node_ready.go:57] node "no-preload-314275" has "Ready":"False" status (will retry)
	I1006 19:53:31.831245  195795 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-830393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (6.116996172s)
	I1006 19:53:31.831277  195795 kic.go:203] duration metric: took 6.117125052s to extract preloaded images to volume ...
	W1006 19:53:31.831429  195795 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 19:53:31.831554  195795 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 19:53:31.938761  195795 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-830393 --name embed-certs-830393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-830393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-830393 --network embed-certs-830393 --ip 192.168.85.2 --volume embed-certs-830393:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 19:53:32.271088  195795 cli_runner.go:164] Run: docker container inspect embed-certs-830393 --format={{.State.Running}}
	I1006 19:53:32.296808  195795 cli_runner.go:164] Run: docker container inspect embed-certs-830393 --format={{.State.Status}}
	I1006 19:53:32.320538  195795 cli_runner.go:164] Run: docker exec embed-certs-830393 stat /var/lib/dpkg/alternatives/iptables
	I1006 19:53:32.463100  195795 oci.go:144] the created container "embed-certs-830393" has a running status.
	I1006 19:53:32.463135  195795 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/embed-certs-830393/id_rsa...
	I1006 19:53:33.373101  195795 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-2540/.minikube/machines/embed-certs-830393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 19:53:33.394532  195795 cli_runner.go:164] Run: docker container inspect embed-certs-830393 --format={{.State.Status}}
	I1006 19:53:33.412730  195795 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 19:53:33.412754  195795 kic_runner.go:114] Args: [docker exec --privileged embed-certs-830393 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 19:53:33.455032  195795 cli_runner.go:164] Run: docker container inspect embed-certs-830393 --format={{.State.Status}}
	I1006 19:53:33.473405  195795 machine.go:93] provisionDockerMachine start ...
	I1006 19:53:33.473523  195795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:53:33.491601  195795 main.go:141] libmachine: Using SSH client type: native
	I1006 19:53:33.492013  195795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1006 19:53:33.492030  195795 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:53:33.493216  195795 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45354->127.0.0.1:33065: read: connection reset by peer
	W1006 19:53:34.599666  192329 node_ready.go:57] node "no-preload-314275" has "Ready":"False" status (will retry)
	W1006 19:53:36.599882  192329 node_ready.go:57] node "no-preload-314275" has "Ready":"False" status (will retry)
	W1006 19:53:38.600603  192329 node_ready.go:57] node "no-preload-314275" has "Ready":"False" status (will retry)
	I1006 19:53:36.631415  195795 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-830393
	
	I1006 19:53:36.631451  195795 ubuntu.go:182] provisioning hostname "embed-certs-830393"
	I1006 19:53:36.631532  195795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:53:36.649165  195795 main.go:141] libmachine: Using SSH client type: native
	I1006 19:53:36.649471  195795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1006 19:53:36.649490  195795 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-830393 && echo "embed-certs-830393" | sudo tee /etc/hostname
	I1006 19:53:36.789084  195795 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-830393
	
	I1006 19:53:36.789184  195795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:53:36.806463  195795 main.go:141] libmachine: Using SSH client type: native
	I1006 19:53:36.806773  195795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1006 19:53:36.806798  195795 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-830393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-830393/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-830393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:53:36.943748  195795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:53:36.943775  195795 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:53:36.943794  195795 ubuntu.go:190] setting up certificates
	I1006 19:53:36.943804  195795 provision.go:84] configureAuth start
	I1006 19:53:36.943862  195795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-830393
	I1006 19:53:36.961069  195795 provision.go:143] copyHostCerts
	I1006 19:53:36.961137  195795 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:53:36.961152  195795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:53:36.961233  195795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:53:36.961340  195795 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:53:36.961353  195795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:53:36.961381  195795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:53:36.961442  195795 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:53:36.961450  195795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:53:36.961475  195795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:53:36.961539  195795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.embed-certs-830393 san=[127.0.0.1 192.168.85.2 embed-certs-830393 localhost minikube]
	I1006 19:53:37.311635  195795 provision.go:177] copyRemoteCerts
	I1006 19:53:37.311720  195795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:53:37.311760  195795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:53:37.331415  195795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/embed-certs-830393/id_rsa Username:docker}
	I1006 19:53:37.427582  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 19:53:37.446116  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:53:37.464502  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
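	(Editor's note, illustrative only: the server.pem copied above was generated a few lines earlier with SANs [127.0.0.1 192.168.85.2 embed-certs-830393 localhost minikube]. A hypothetical manual check of those SANs on the node, using the profile name and paths from this log, would be:)

	    # illustrative sketch, not part of the test output
	    minikube ssh -p embed-certs-830393 -- \
	      sudo openssl x509 -in /etc/docker/server.pem -noout -text | \
	      grep -A1 'Subject Alternative Name'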
	I1006 19:53:37.483435  195795 provision.go:87] duration metric: took 539.617036ms to configureAuth
	I1006 19:53:37.483459  195795 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:53:37.483652  195795 config.go:182] Loaded profile config "embed-certs-830393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:53:37.483794  195795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:53:37.501808  195795 main.go:141] libmachine: Using SSH client type: native
	I1006 19:53:37.502119  195795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1006 19:53:37.502137  195795 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:53:37.776627  195795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:53:37.776649  195795 machine.go:96] duration metric: took 4.303216091s to provisionDockerMachine
	I1006 19:53:37.776659  195795 client.go:171] duration metric: took 12.879345928s to LocalClient.Create
	I1006 19:53:37.776675  195795 start.go:167] duration metric: took 12.87942044s to libmachine.API.Create "embed-certs-830393"
	I1006 19:53:37.776682  195795 start.go:293] postStartSetup for "embed-certs-830393" (driver="docker")
	I1006 19:53:37.776692  195795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:53:37.776771  195795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:53:37.776814  195795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:53:37.795121  195795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/embed-certs-830393/id_rsa Username:docker}
	I1006 19:53:37.892566  195795 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:53:37.895891  195795 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:53:37.895921  195795 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:53:37.895932  195795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:53:37.895986  195795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:53:37.896068  195795 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:53:37.896176  195795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:53:37.905089  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:53:37.924450  195795 start.go:296] duration metric: took 147.75352ms for postStartSetup
	I1006 19:53:37.924804  195795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-830393
	I1006 19:53:37.941732  195795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/config.json ...
	I1006 19:53:37.942010  195795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:53:37.942050  195795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:53:37.958552  195795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/embed-certs-830393/id_rsa Username:docker}
	I1006 19:53:38.054119  195795 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:53:38.059590  195795 start.go:128] duration metric: took 13.167908906s to createHost
	I1006 19:53:38.059619  195795 start.go:83] releasing machines lock for "embed-certs-830393", held for 13.168050783s
	I1006 19:53:38.059693  195795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-830393
	I1006 19:53:38.078973  195795 ssh_runner.go:195] Run: cat /version.json
	I1006 19:53:38.079008  195795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:53:38.079030  195795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:53:38.079074  195795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:53:38.104040  195795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/embed-certs-830393/id_rsa Username:docker}
	I1006 19:53:38.117296  195795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/embed-certs-830393/id_rsa Username:docker}
	I1006 19:53:38.308539  195795 ssh_runner.go:195] Run: systemctl --version
	I1006 19:53:38.315362  195795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:53:38.353220  195795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:53:38.357766  195795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:53:38.357858  195795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:53:38.388368  195795 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 19:53:38.388401  195795 start.go:495] detecting cgroup driver to use...
	I1006 19:53:38.388450  195795 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:53:38.388519  195795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:53:38.407647  195795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:53:38.420880  195795 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:53:38.421022  195795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:53:38.439627  195795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:53:38.458054  195795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:53:38.587291  195795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:53:38.717122  195795 docker.go:234] disabling docker service ...
	I1006 19:53:38.717202  195795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:53:38.740062  195795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:53:38.754195  195795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:53:38.881295  195795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:53:39.004531  195795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:53:39.022843  195795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:53:39.039055  195795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:53:39.039208  195795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:53:39.049357  195795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:53:39.049439  195795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:53:39.065491  195795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:53:39.075785  195795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:53:39.087774  195795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:53:39.100927  195795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:53:39.110964  195795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:53:39.126933  195795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:53:39.136334  195795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:53:39.143878  195795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:53:39.151821  195795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:53:39.280417  195795 ssh_runner.go:195] Run: sudo systemctl restart crio
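	(Editor's note, illustrative only: the sed edits above set the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A hypothetical spot check of the resulting keys, with expected values inferred from the commands in this log:)

	    # illustrative sketch, not part of the test output
	    minikube ssh -p embed-certs-830393 -- sudo grep -E \
	      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # expected, approximately:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",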
	I1006 19:53:39.416194  195795 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:53:39.416322  195795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:53:39.420733  195795 start.go:563] Will wait 60s for crictl version
	I1006 19:53:39.420873  195795 ssh_runner.go:195] Run: which crictl
	I1006 19:53:39.425389  195795 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:53:39.451071  195795 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:53:39.451236  195795 ssh_runner.go:195] Run: crio --version
	I1006 19:53:39.483034  195795 ssh_runner.go:195] Run: crio --version
	I1006 19:53:39.519541  195795 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:53:39.522265  195795 cli_runner.go:164] Run: docker network inspect embed-certs-830393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:53:39.539678  195795 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1006 19:53:39.543627  195795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:53:39.553620  195795 kubeadm.go:883] updating cluster {Name:embed-certs-830393 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-830393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:53:39.553780  195795 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:53:39.553837  195795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:53:39.589868  195795 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:53:39.589891  195795 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:53:39.589951  195795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:53:39.622777  195795 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:53:39.622797  195795 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:53:39.622805  195795 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1006 19:53:39.622891  195795 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-830393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-830393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
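	(Editor's note, illustrative only: the kubelet unit fragment above is installed as /lib/systemd/system/kubelet.service plus a drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A hypothetical way to inspect the rendered unit on the node:)

	    # illustrative sketch, not part of the test output
	    minikube ssh -p embed-certs-830393 -- sudo systemctl cat kubelet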
	I1006 19:53:39.622970  195795 ssh_runner.go:195] Run: crio config
	I1006 19:53:39.680154  195795 cni.go:84] Creating CNI manager for ""
	I1006 19:53:39.680185  195795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:53:39.680199  195795 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:53:39.680276  195795 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-830393 NodeName:embed-certs-830393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:53:39.680415  195795 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-830393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:53:39.680491  195795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:53:39.689004  195795 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:53:39.689081  195795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:53:39.697404  195795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1006 19:53:39.710626  195795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:53:39.724665  195795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
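	(Editor's note, illustrative only: the kubeadm configuration printed above is written to /var/tmp/minikube/kubeadm.yaml.new (2215 bytes) and later copied to /var/tmp/minikube/kubeadm.yaml before `kubeadm init` runs. A hypothetical way to inspect it and, on kubeadm releases that ship `kubeadm config validate`, pre-validate it:)

	    # illustrative sketch, not part of the test output
	    minikube ssh -p embed-certs-830393 -- sudo cat /var/tmp/minikube/kubeadm.yaml
	    minikube ssh -p embed-certs-830393 -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm \
	      config validate --config /var/tmp/minikube/kubeadm.yaml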
	I1006 19:53:39.738297  195795 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:53:39.742184  195795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:53:39.752247  195795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:53:39.876332  195795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:53:39.894529  195795 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393 for IP: 192.168.85.2
	I1006 19:53:39.894603  195795 certs.go:195] generating shared ca certs ...
	I1006 19:53:39.894635  195795 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:53:39.894818  195795 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:53:39.894898  195795 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:53:39.894930  195795 certs.go:257] generating profile certs ...
	I1006 19:53:39.895010  195795 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/client.key
	I1006 19:53:39.895046  195795 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/client.crt with IP's: []
	I1006 19:53:40.327827  195795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/client.crt ...
	I1006 19:53:40.327862  195795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/client.crt: {Name:mk8d8ec8ea49f480ebd6a87e1719f6f9bd5091a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:53:40.328056  195795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/client.key ...
	I1006 19:53:40.328068  195795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/client.key: {Name:mkf94feb42520898db97382651a81687e8b463ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:53:40.328159  195795 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/apiserver.key.d7862f5c
	I1006 19:53:40.328175  195795 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/apiserver.crt.d7862f5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1006 19:53:41.547269  195795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/apiserver.crt.d7862f5c ...
	I1006 19:53:41.547302  195795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/apiserver.crt.d7862f5c: {Name:mk564d9cb84850fffa03ea289f311a9a60d74512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:53:41.547479  195795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/apiserver.key.d7862f5c ...
	I1006 19:53:41.547494  195795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/apiserver.key.d7862f5c: {Name:mk74b1b330ced2180e6081bf4343880c866a65ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:53:41.547579  195795 certs.go:382] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/apiserver.crt.d7862f5c -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/apiserver.crt
	I1006 19:53:41.547657  195795 certs.go:386] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/apiserver.key.d7862f5c -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/apiserver.key
	I1006 19:53:41.547733  195795 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/proxy-client.key
	I1006 19:53:41.547753  195795 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/proxy-client.crt with IP's: []
	I1006 19:53:42.178247  195795 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/proxy-client.crt ...
	I1006 19:53:42.178284  195795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/proxy-client.crt: {Name:mk3feffbde468bbf94bb40bd710847eec9fa680f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:53:42.178499  195795 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/proxy-client.key ...
	I1006 19:53:42.178515  195795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/proxy-client.key: {Name:mkb22b23d12086f2a8dc2003b2f0ec41088d76df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:53:42.178737  195795 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:53:42.178798  195795 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:53:42.178807  195795 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:53:42.178832  195795 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:53:42.178861  195795 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:53:42.178888  195795 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:53:42.178939  195795 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:53:42.179565  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:53:42.206347  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:53:42.229627  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:53:42.251373  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:53:42.277516  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1006 19:53:42.306519  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 19:53:42.332271  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:53:42.352592  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 19:53:42.373179  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:53:42.390497  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:53:42.419396  195795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:53:42.456072  195795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:53:42.471605  195795 ssh_runner.go:195] Run: openssl version
	I1006 19:53:42.480281  195795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:53:42.493241  195795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:53:42.497117  195795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:53:42.497186  195795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:53:42.543536  195795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:53:42.551765  195795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:53:42.560020  195795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:53:42.563850  195795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:53:42.563912  195795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:53:42.607485  195795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 19:53:42.616289  195795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:53:42.624522  195795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:53:42.628856  195795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:53:42.628917  195795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:53:42.670566  195795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
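	(Editor's note: the ln -fs steps above wire up OpenSSL's hashed-directory lookup. `openssl x509 -hash -noout -in <cert>` prints the certificate's subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs is what the default verify path resolves; b5213941 is that hash for minikubeCA.pem here. Done by hand for one certificate it would look roughly like:)

	    # illustrative sketch, not part of the test output
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"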
	I1006 19:53:42.679983  195795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:53:42.684460  195795 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 19:53:42.684515  195795 kubeadm.go:400] StartCluster: {Name:embed-certs-830393 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-830393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:53:42.684586  195795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:53:42.684654  195795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:53:42.719028  195795 cri.go:89] found id: ""
	I1006 19:53:42.719101  195795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:53:42.728505  195795 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 19:53:42.736646  195795 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:53:42.736736  195795 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:53:42.744842  195795 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:53:42.744859  195795 kubeadm.go:157] found existing configuration files:
	
	I1006 19:53:42.744933  195795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 19:53:42.752890  195795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:53:42.752959  195795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:53:42.770853  195795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 19:53:42.782245  195795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:53:42.782390  195795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:53:42.790929  195795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 19:53:42.801756  195795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:53:42.801874  195795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:53:42.810405  195795 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 19:53:42.821251  195795 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:53:42.821368  195795 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 19:53:42.831598  195795 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:53:42.887004  195795 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:53:42.888740  195795 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:53:42.937868  195795 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:53:42.937994  195795 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:53:42.938063  195795 kubeadm.go:318] OS: Linux
	I1006 19:53:42.938132  195795 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:53:42.938239  195795 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:53:42.938324  195795 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:53:42.938415  195795 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:53:42.938472  195795 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:53:42.938532  195795 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:53:42.938582  195795 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:53:42.938633  195795 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:53:42.938683  195795 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:53:43.055918  195795 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:53:43.056101  195795 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:53:43.056253  195795 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:53:43.070909  195795 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
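	(Editor's note, illustrative only: the preflight hint a few lines above about pre-pulling images corresponds roughly to the following command; the Kubernetes version and CRI socket are taken from this log.)

	    # illustrative sketch, not part of the test output
	    sudo kubeadm config images pull \
	      --kubernetes-version v1.34.1 \
	      --cri-socket unix:///var/run/crio/crio.sock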
	W1006 19:53:41.101220  192329 node_ready.go:57] node "no-preload-314275" has "Ready":"False" status (will retry)
	I1006 19:53:42.600008  192329 node_ready.go:49] node "no-preload-314275" is "Ready"
	I1006 19:53:42.600036  192329 node_ready.go:38] duration metric: took 14.503195456s for node "no-preload-314275" to be "Ready" ...
	I1006 19:53:42.600049  192329 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:53:42.600111  192329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:53:42.614866  192329 api_server.go:72] duration metric: took 15.922999453s to wait for apiserver process to appear ...
	I1006 19:53:42.614888  192329 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:53:42.614907  192329 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:53:42.626462  192329 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1006 19:53:42.627641  192329 api_server.go:141] control plane version: v1.34.1
	I1006 19:53:42.627662  192329 api_server.go:131] duration metric: took 12.767959ms to wait for apiserver health ...
	I1006 19:53:42.627671  192329 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:53:42.632916  192329 system_pods.go:59] 8 kube-system pods found
	I1006 19:53:42.633012  192329 system_pods.go:61] "coredns-66bc5c9577-tccns" [3bc39edb-cfcd-483e-9107-1e53757c329d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:53:42.633048  192329 system_pods.go:61] "etcd-no-preload-314275" [1bc41062-9b68-4fbf-b275-37eadcd1bbaf] Running
	I1006 19:53:42.633054  192329 system_pods.go:61] "kindnet-b6hb7" [bcdd65eb-12b8-4b04-be54-5dd536ce6b7a] Running
	I1006 19:53:42.633070  192329 system_pods.go:61] "kube-apiserver-no-preload-314275" [126f4e3a-6a02-45ed-aa95-99483e2a91c0] Running
	I1006 19:53:42.633075  192329 system_pods.go:61] "kube-controller-manager-no-preload-314275" [1507f50d-94ad-4592-877a-826f1a6f2033] Running
	I1006 19:53:42.633080  192329 system_pods.go:61] "kube-proxy-nr6pc" [14e2cef3-72bf-4202-b91a-9b248e7b93ec] Running
	I1006 19:53:42.633084  192329 system_pods.go:61] "kube-scheduler-no-preload-314275" [ed7b7492-9ded-48c2-957f-c46ce40d1e14] Running
	I1006 19:53:42.633094  192329 system_pods.go:61] "storage-provisioner" [00c9a83a-e9f1-4db6-8b19-734ade7dec64] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:53:42.633118  192329 system_pods.go:74] duration metric: took 5.440923ms to wait for pod list to return data ...
	I1006 19:53:42.633126  192329 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:53:42.644284  192329 default_sa.go:45] found service account: "default"
	I1006 19:53:42.644359  192329 default_sa.go:55] duration metric: took 11.225722ms for default service account to be created ...
	I1006 19:53:42.644383  192329 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 19:53:42.648317  192329 system_pods.go:86] 8 kube-system pods found
	I1006 19:53:42.648415  192329 system_pods.go:89] "coredns-66bc5c9577-tccns" [3bc39edb-cfcd-483e-9107-1e53757c329d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:53:42.648442  192329 system_pods.go:89] "etcd-no-preload-314275" [1bc41062-9b68-4fbf-b275-37eadcd1bbaf] Running
	I1006 19:53:42.648463  192329 system_pods.go:89] "kindnet-b6hb7" [bcdd65eb-12b8-4b04-be54-5dd536ce6b7a] Running
	I1006 19:53:42.648496  192329 system_pods.go:89] "kube-apiserver-no-preload-314275" [126f4e3a-6a02-45ed-aa95-99483e2a91c0] Running
	I1006 19:53:42.648519  192329 system_pods.go:89] "kube-controller-manager-no-preload-314275" [1507f50d-94ad-4592-877a-826f1a6f2033] Running
	I1006 19:53:42.648537  192329 system_pods.go:89] "kube-proxy-nr6pc" [14e2cef3-72bf-4202-b91a-9b248e7b93ec] Running
	I1006 19:53:42.648555  192329 system_pods.go:89] "kube-scheduler-no-preload-314275" [ed7b7492-9ded-48c2-957f-c46ce40d1e14] Running
	I1006 19:53:42.648592  192329 system_pods.go:89] "storage-provisioner" [00c9a83a-e9f1-4db6-8b19-734ade7dec64] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:53:42.648626  192329 retry.go:31] will retry after 222.78033ms: missing components: kube-dns
	I1006 19:53:42.877166  192329 system_pods.go:86] 8 kube-system pods found
	I1006 19:53:42.877247  192329 system_pods.go:89] "coredns-66bc5c9577-tccns" [3bc39edb-cfcd-483e-9107-1e53757c329d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:53:42.877269  192329 system_pods.go:89] "etcd-no-preload-314275" [1bc41062-9b68-4fbf-b275-37eadcd1bbaf] Running
	I1006 19:53:42.877290  192329 system_pods.go:89] "kindnet-b6hb7" [bcdd65eb-12b8-4b04-be54-5dd536ce6b7a] Running
	I1006 19:53:42.877317  192329 system_pods.go:89] "kube-apiserver-no-preload-314275" [126f4e3a-6a02-45ed-aa95-99483e2a91c0] Running
	I1006 19:53:42.877345  192329 system_pods.go:89] "kube-controller-manager-no-preload-314275" [1507f50d-94ad-4592-877a-826f1a6f2033] Running
	I1006 19:53:42.877365  192329 system_pods.go:89] "kube-proxy-nr6pc" [14e2cef3-72bf-4202-b91a-9b248e7b93ec] Running
	I1006 19:53:42.877391  192329 system_pods.go:89] "kube-scheduler-no-preload-314275" [ed7b7492-9ded-48c2-957f-c46ce40d1e14] Running
	I1006 19:53:42.877411  192329 system_pods.go:89] "storage-provisioner" [00c9a83a-e9f1-4db6-8b19-734ade7dec64] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:53:42.877440  192329 retry.go:31] will retry after 287.156256ms: missing components: kube-dns
	I1006 19:53:43.169363  192329 system_pods.go:86] 8 kube-system pods found
	I1006 19:53:43.169446  192329 system_pods.go:89] "coredns-66bc5c9577-tccns" [3bc39edb-cfcd-483e-9107-1e53757c329d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:53:43.169468  192329 system_pods.go:89] "etcd-no-preload-314275" [1bc41062-9b68-4fbf-b275-37eadcd1bbaf] Running
	I1006 19:53:43.169492  192329 system_pods.go:89] "kindnet-b6hb7" [bcdd65eb-12b8-4b04-be54-5dd536ce6b7a] Running
	I1006 19:53:43.169526  192329 system_pods.go:89] "kube-apiserver-no-preload-314275" [126f4e3a-6a02-45ed-aa95-99483e2a91c0] Running
	I1006 19:53:43.169546  192329 system_pods.go:89] "kube-controller-manager-no-preload-314275" [1507f50d-94ad-4592-877a-826f1a6f2033] Running
	I1006 19:53:43.169564  192329 system_pods.go:89] "kube-proxy-nr6pc" [14e2cef3-72bf-4202-b91a-9b248e7b93ec] Running
	I1006 19:53:43.169621  192329 system_pods.go:89] "kube-scheduler-no-preload-314275" [ed7b7492-9ded-48c2-957f-c46ce40d1e14] Running
	I1006 19:53:43.169642  192329 system_pods.go:89] "storage-provisioner" [00c9a83a-e9f1-4db6-8b19-734ade7dec64] Running
	I1006 19:53:43.169669  192329 retry.go:31] will retry after 432.644396ms: missing components: kube-dns
	I1006 19:53:43.608097  192329 system_pods.go:86] 8 kube-system pods found
	I1006 19:53:43.608195  192329 system_pods.go:89] "coredns-66bc5c9577-tccns" [3bc39edb-cfcd-483e-9107-1e53757c329d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:53:43.608219  192329 system_pods.go:89] "etcd-no-preload-314275" [1bc41062-9b68-4fbf-b275-37eadcd1bbaf] Running
	I1006 19:53:43.608240  192329 system_pods.go:89] "kindnet-b6hb7" [bcdd65eb-12b8-4b04-be54-5dd536ce6b7a] Running
	I1006 19:53:43.608275  192329 system_pods.go:89] "kube-apiserver-no-preload-314275" [126f4e3a-6a02-45ed-aa95-99483e2a91c0] Running
	I1006 19:53:43.608298  192329 system_pods.go:89] "kube-controller-manager-no-preload-314275" [1507f50d-94ad-4592-877a-826f1a6f2033] Running
	I1006 19:53:43.608318  192329 system_pods.go:89] "kube-proxy-nr6pc" [14e2cef3-72bf-4202-b91a-9b248e7b93ec] Running
	I1006 19:53:43.608345  192329 system_pods.go:89] "kube-scheduler-no-preload-314275" [ed7b7492-9ded-48c2-957f-c46ce40d1e14] Running
	I1006 19:53:43.608371  192329 system_pods.go:89] "storage-provisioner" [00c9a83a-e9f1-4db6-8b19-734ade7dec64] Running
	I1006 19:53:43.608401  192329 retry.go:31] will retry after 435.686409ms: missing components: kube-dns
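	(Editor's note, illustrative only: the retry loop above is waiting for the CoreDNS (kube-dns) pod to become Ready; a roughly equivalent manual check against the same cluster would be:)

	    # illustrative sketch, not part of the test output
	    kubectl --context no-preload-314275 -n kube-system \
	      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s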
	I1006 19:53:43.076011  195795 out.go:252]   - Generating certificates and keys ...
	I1006 19:53:43.076131  195795 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:53:43.076255  195795 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:53:43.221597  195795 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 19:53:43.610103  195795 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 19:53:43.923765  195795 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 19:53:44.262736  195795 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 19:53:44.627273  195795 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 19:53:44.627634  195795 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-830393 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1006 19:53:44.050671  192329 system_pods.go:86] 8 kube-system pods found
	I1006 19:53:44.050755  192329 system_pods.go:89] "coredns-66bc5c9577-tccns" [3bc39edb-cfcd-483e-9107-1e53757c329d] Running
	I1006 19:53:44.050779  192329 system_pods.go:89] "etcd-no-preload-314275" [1bc41062-9b68-4fbf-b275-37eadcd1bbaf] Running
	I1006 19:53:44.050825  192329 system_pods.go:89] "kindnet-b6hb7" [bcdd65eb-12b8-4b04-be54-5dd536ce6b7a] Running
	I1006 19:53:44.050857  192329 system_pods.go:89] "kube-apiserver-no-preload-314275" [126f4e3a-6a02-45ed-aa95-99483e2a91c0] Running
	I1006 19:53:44.050884  192329 system_pods.go:89] "kube-controller-manager-no-preload-314275" [1507f50d-94ad-4592-877a-826f1a6f2033] Running
	I1006 19:53:44.050922  192329 system_pods.go:89] "kube-proxy-nr6pc" [14e2cef3-72bf-4202-b91a-9b248e7b93ec] Running
	I1006 19:53:44.050946  192329 system_pods.go:89] "kube-scheduler-no-preload-314275" [ed7b7492-9ded-48c2-957f-c46ce40d1e14] Running
	I1006 19:53:44.050964  192329 system_pods.go:89] "storage-provisioner" [00c9a83a-e9f1-4db6-8b19-734ade7dec64] Running
	I1006 19:53:44.051022  192329 system_pods.go:126] duration metric: took 1.406619952s to wait for k8s-apps to be running ...
	I1006 19:53:44.051056  192329 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 19:53:44.051174  192329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:53:44.065992  192329 system_svc.go:56] duration metric: took 14.927891ms WaitForService to wait for kubelet
	I1006 19:53:44.066065  192329 kubeadm.go:586] duration metric: took 17.374210813s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:53:44.066100  192329 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:53:44.069959  192329 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:53:44.070036  192329 node_conditions.go:123] node cpu capacity is 2
	I1006 19:53:44.070065  192329 node_conditions.go:105] duration metric: took 3.947261ms to run NodePressure ...
	I1006 19:53:44.070090  192329 start.go:241] waiting for startup goroutines ...
	I1006 19:53:44.070120  192329 start.go:246] waiting for cluster config update ...
	I1006 19:53:44.070146  192329 start.go:255] writing updated cluster config ...
	I1006 19:53:44.070465  192329 ssh_runner.go:195] Run: rm -f paused
	I1006 19:53:44.074598  192329 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:53:44.080571  192329 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tccns" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:53:44.086410  192329 pod_ready.go:94] pod "coredns-66bc5c9577-tccns" is "Ready"
	I1006 19:53:44.086491  192329 pod_ready.go:86] duration metric: took 5.842547ms for pod "coredns-66bc5c9577-tccns" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:53:44.089491  192329 pod_ready.go:83] waiting for pod "etcd-no-preload-314275" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:53:44.097031  192329 pod_ready.go:94] pod "etcd-no-preload-314275" is "Ready"
	I1006 19:53:44.097111  192329 pod_ready.go:86] duration metric: took 7.537281ms for pod "etcd-no-preload-314275" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:53:44.099880  192329 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-314275" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:53:44.109037  192329 pod_ready.go:94] pod "kube-apiserver-no-preload-314275" is "Ready"
	I1006 19:53:44.109114  192329 pod_ready.go:86] duration metric: took 9.143068ms for pod "kube-apiserver-no-preload-314275" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:53:44.112508  192329 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-314275" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:53:44.481089  192329 pod_ready.go:94] pod "kube-controller-manager-no-preload-314275" is "Ready"
	I1006 19:53:44.481185  192329 pod_ready.go:86] duration metric: took 368.594465ms for pod "kube-controller-manager-no-preload-314275" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:53:44.680926  192329 pod_ready.go:83] waiting for pod "kube-proxy-nr6pc" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:53:45.082950  192329 pod_ready.go:94] pod "kube-proxy-nr6pc" is "Ready"
	I1006 19:53:45.082991  192329 pod_ready.go:86] duration metric: took 402.036625ms for pod "kube-proxy-nr6pc" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:53:45.293507  192329 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-314275" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:53:45.681590  192329 pod_ready.go:94] pod "kube-scheduler-no-preload-314275" is "Ready"
	I1006 19:53:45.681629  192329 pod_ready.go:86] duration metric: took 388.091456ms for pod "kube-scheduler-no-preload-314275" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:53:45.681640  192329 pod_ready.go:40] duration metric: took 1.606968569s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:53:45.751097  192329 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 19:53:45.754421  192329 out.go:179] * Done! kubectl is now configured to use "no-preload-314275" cluster and "default" namespace by default
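	(Editor's note, illustrative only: after the "Done!" message the kubeconfig context is named after the profile; a hypothetical quick sanity check of the new cluster:)

	    # illustrative sketch, not part of the test output
	    kubectl --context no-preload-314275 get nodes -o wide
	    kubectl --context no-preload-314275 get pods -n kube-system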
	I1006 19:53:45.310219  195795 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 19:53:45.310888  195795 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-830393 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1006 19:53:45.905718  195795 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 19:53:46.663783  195795 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 19:53:47.058683  195795 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 19:53:47.059187  195795 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 19:53:47.205395  195795 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:53:47.258853  195795 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:53:48.467032  195795 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:53:49.215751  195795 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:53:49.600524  195795 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:53:49.601556  195795 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:53:49.604653  195795 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 19:53:49.608076  195795 out.go:252]   - Booting up control plane ...
	I1006 19:53:49.608193  195795 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:53:49.608288  195795 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:53:49.609158  195795 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:53:49.626059  195795 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:53:49.626451  195795 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:53:49.634699  195795 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:53:49.635045  195795 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:53:49.635327  195795 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:53:49.777908  195795 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:53:49.778347  195795 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:53:50.789584  195795 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.010719357s
	I1006 19:53:50.794748  195795 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:53:50.795910  195795 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1006 19:53:50.796403  195795 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:53:50.796754  195795 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> CRI-O <==
	Oct 06 19:53:42 no-preload-314275 crio[833]: time="2025-10-06T19:53:42.910248618Z" level=info msg="Created container 3db9ce2bab7af0a69fadde601f3cb9fc4dcd17f17c87e81fc07ba231ca1e7f33: kube-system/coredns-66bc5c9577-tccns/coredns" id=de5864c1-c055-47a3-be68-d5baea41e8c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:53:42 no-preload-314275 crio[833]: time="2025-10-06T19:53:42.911387813Z" level=info msg="Starting container: 3db9ce2bab7af0a69fadde601f3cb9fc4dcd17f17c87e81fc07ba231ca1e7f33" id=fe3387a2-b2b1-4020-88a1-e14086098a25 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:53:42 no-preload-314275 crio[833]: time="2025-10-06T19:53:42.918719116Z" level=info msg="Started container" PID=2492 containerID=3db9ce2bab7af0a69fadde601f3cb9fc4dcd17f17c87e81fc07ba231ca1e7f33 description=kube-system/coredns-66bc5c9577-tccns/coredns id=fe3387a2-b2b1-4020-88a1-e14086098a25 name=/runtime.v1.RuntimeService/StartContainer sandboxID=adf16264ed40114ebdd63f53eefd0b8e088730a739ec7d2c27fbafc2669a8906
	Oct 06 19:53:46 no-preload-314275 crio[833]: time="2025-10-06T19:53:46.33554078Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d1c5bf94-776e-43c1-8d3c-d39dcd820317 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:53:46 no-preload-314275 crio[833]: time="2025-10-06T19:53:46.335608695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:53:46 no-preload-314275 crio[833]: time="2025-10-06T19:53:46.352311067Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c3bb8b995ebe3c096042715d10f850bc7f8593a9e1628400b4c288e71d64062a UID:455522a4-5398-4b39-bd9c-3c0361fb193f NetNS:/var/run/netns/e10abf28-76c1-4d0d-b5a3-724d7d8a1bdc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079e58}] Aliases:map[]}"
	Oct 06 19:53:46 no-preload-314275 crio[833]: time="2025-10-06T19:53:46.352349541Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 06 19:53:46 no-preload-314275 crio[833]: time="2025-10-06T19:53:46.363138792Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c3bb8b995ebe3c096042715d10f850bc7f8593a9e1628400b4c288e71d64062a UID:455522a4-5398-4b39-bd9c-3c0361fb193f NetNS:/var/run/netns/e10abf28-76c1-4d0d-b5a3-724d7d8a1bdc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079e58}] Aliases:map[]}"
	Oct 06 19:53:46 no-preload-314275 crio[833]: time="2025-10-06T19:53:46.363307008Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 06 19:53:46 no-preload-314275 crio[833]: time="2025-10-06T19:53:46.365869884Z" level=info msg="Ran pod sandbox c3bb8b995ebe3c096042715d10f850bc7f8593a9e1628400b4c288e71d64062a with infra container: default/busybox/POD" id=d1c5bf94-776e-43c1-8d3c-d39dcd820317 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:53:46 no-preload-314275 crio[833]: time="2025-10-06T19:53:46.368481892Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e46765cf-8a6c-493d-9a6b-291fb85b891c name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:53:46 no-preload-314275 crio[833]: time="2025-10-06T19:53:46.368909461Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e46765cf-8a6c-493d-9a6b-291fb85b891c name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:53:46 no-preload-314275 crio[833]: time="2025-10-06T19:53:46.369078498Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e46765cf-8a6c-493d-9a6b-291fb85b891c name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:53:46 no-preload-314275 crio[833]: time="2025-10-06T19:53:46.370852231Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a8fd10b4-f3f3-46e6-8923-d52fd39eae98 name=/runtime.v1.ImageService/PullImage
	Oct 06 19:53:46 no-preload-314275 crio[833]: time="2025-10-06T19:53:46.380269594Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 06 19:53:48 no-preload-314275 crio[833]: time="2025-10-06T19:53:48.237729097Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a8fd10b4-f3f3-46e6-8923-d52fd39eae98 name=/runtime.v1.ImageService/PullImage
	Oct 06 19:53:48 no-preload-314275 crio[833]: time="2025-10-06T19:53:48.23848541Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=36d049e2-dfc9-494a-8fa7-d36c5be0d598 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:53:48 no-preload-314275 crio[833]: time="2025-10-06T19:53:48.242837749Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=19ef03da-8da4-43d6-83a5-d7e680b7a5cf name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:53:48 no-preload-314275 crio[833]: time="2025-10-06T19:53:48.250036102Z" level=info msg="Creating container: default/busybox/busybox" id=4188f1c4-f3fa-4ad1-9951-b32640e0ecaf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:53:48 no-preload-314275 crio[833]: time="2025-10-06T19:53:48.250928827Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:53:48 no-preload-314275 crio[833]: time="2025-10-06T19:53:48.255551078Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:53:48 no-preload-314275 crio[833]: time="2025-10-06T19:53:48.256162035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:53:48 no-preload-314275 crio[833]: time="2025-10-06T19:53:48.278851457Z" level=info msg="Created container fa2df0404d2378b2d122304145a586f2a9c28298937b8e343fafde6b58c8f128: default/busybox/busybox" id=4188f1c4-f3fa-4ad1-9951-b32640e0ecaf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:53:48 no-preload-314275 crio[833]: time="2025-10-06T19:53:48.282775209Z" level=info msg="Starting container: fa2df0404d2378b2d122304145a586f2a9c28298937b8e343fafde6b58c8f128" id=307047a2-ff41-45be-9ee1-b71ad380b0b2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:53:48 no-preload-314275 crio[833]: time="2025-10-06T19:53:48.290282155Z" level=info msg="Started container" PID=2545 containerID=fa2df0404d2378b2d122304145a586f2a9c28298937b8e343fafde6b58c8f128 description=default/busybox/busybox id=307047a2-ff41-45be-9ee1-b71ad380b0b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c3bb8b995ebe3c096042715d10f850bc7f8593a9e1628400b4c288e71d64062a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	fa2df0404d237       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   c3bb8b995ebe3       busybox                                     default
	3db9ce2bab7af       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   adf16264ed401       coredns-66bc5c9577-tccns                    kube-system
	58536f2dc03f2       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   d4b2a5095cef4       storage-provisioner                         kube-system
	cdccd9118144f       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   4499aa88a90a3       kindnet-b6hb7                               kube-system
	e33ee5b29b3f8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      29 seconds ago      Running             kube-proxy                0                   fa609c9066260       kube-proxy-nr6pc                            kube-system
	e5c5e1d59e179       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      47 seconds ago      Running             kube-scheduler            0                   f5e08b62cbb2b       kube-scheduler-no-preload-314275            kube-system
	5081aa8effacd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      47 seconds ago      Running             etcd                      0                   234dd857ec172       etcd-no-preload-314275                      kube-system
	9a28215b3b27e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      47 seconds ago      Running             kube-apiserver            0                   73f6d87c744d4       kube-apiserver-no-preload-314275            kube-system
	afaa74d57eb90       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      47 seconds ago      Running             kube-controller-manager   0                   0391aa67fbc19       kube-controller-manager-no-preload-314275   kube-system
	
	
	==> coredns [3db9ce2bab7af0a69fadde601f3cb9fc4dcd17f17c87e81fc07ba231ca1e7f33] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40711 - 22364 "HINFO IN 7478046780428639041.1507401024354662512. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016674301s
	
	
	==> describe nodes <==
	Name:               no-preload-314275
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-314275
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=no-preload-314275
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_53_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:53:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-314275
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:53:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:53:52 +0000   Mon, 06 Oct 2025 19:53:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:53:52 +0000   Mon, 06 Oct 2025 19:53:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:53:52 +0000   Mon, 06 Oct 2025 19:53:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:53:52 +0000   Mon, 06 Oct 2025 19:53:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-314275
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 d453a036e29540cd953f639a3f1a7ffd
	  System UUID:                063eafb6-36b6-4179-b2e4-ad5bbf368dcb
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-tccns                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-no-preload-314275                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kindnet-b6hb7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-314275             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-314275    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-nr6pc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-314275             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Warning  CgroupV1                 49s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node no-preload-314275 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node no-preload-314275 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node no-preload-314275 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node no-preload-314275 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node no-preload-314275 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s                kubelet          Node no-preload-314275 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                node-controller  Node no-preload-314275 event: Registered Node no-preload-314275 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-314275 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 6 19:21] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:23] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:52] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5081aa8effacd595c405d580473ff95f9c528f73b1d804792b006a2bdb58f73f] <==
	{"level":"warn","ts":"2025-10-06T19:53:14.664530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:14.700178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:14.760017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:14.774170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:14.787650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:14.818413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:14.855831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:14.876120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:14.910772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:14.955255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:14.990083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:15.019895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:15.053757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:15.089886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:15.124056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:15.164206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:15.195319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:15.229502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:15.269583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:15.300720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:15.346789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:15.375958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:15.420721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:15.448603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:15.638485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35086","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:53:57 up  1:36,  0 user,  load average: 4.15, 2.25, 1.84
	Linux no-preload-314275 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cdccd9118144f5c0484daf20ebe1db213b90f245eb2d54415e44a14201265ff7] <==
	I1006 19:53:32.051090       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:53:32.054187       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1006 19:53:32.055226       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:53:32.055248       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:53:32.055264       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:53:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:53:32.255685       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:53:32.255789       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:53:32.345391       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:53:32.364080       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1006 19:53:32.548270       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:53:32.548363       1 metrics.go:72] Registering metrics
	I1006 19:53:32.548724       1 controller.go:711] "Syncing nftables rules"
	I1006 19:53:42.263791       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:53:42.263856       1 main.go:301] handling current node
	I1006 19:53:52.256979       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:53:52.257019       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9a28215b3b27eec2a6c4b6fd13af6f94009699a4480960fede19bc2fc5694835] <==
	I1006 19:53:17.410750       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1006 19:53:17.411122       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1006 19:53:17.457234       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1006 19:53:17.505278       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:53:17.505339       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1006 19:53:17.548146       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:53:17.556752       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1006 19:53:17.666394       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:53:17.870327       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1006 19:53:17.903001       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1006 19:53:17.903041       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:53:19.566212       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:53:19.643812       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:53:19.790537       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1006 19:53:19.802184       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1006 19:53:19.803452       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 19:53:19.816399       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 19:53:20.295499       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 19:53:21.340178       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 19:53:21.369120       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1006 19:53:21.390537       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1006 19:53:25.394430       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 19:53:26.142202       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:53:26.150562       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:53:26.386883       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [afaa74d57eb90392f53407a97c861155a40abcae09d4ca7c56ede2ac261a3727] <==
	I1006 19:53:25.319953       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1006 19:53:25.324759       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 19:53:25.324966       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:53:25.324981       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:53:25.324989       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 19:53:25.329906       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1006 19:53:25.331605       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1006 19:53:25.331850       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1006 19:53:25.333146       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1006 19:53:25.333178       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1006 19:53:25.333216       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 19:53:25.334390       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1006 19:53:25.334721       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1006 19:53:25.334829       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:53:25.337695       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1006 19:53:25.338376       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1006 19:53:25.340296       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1006 19:53:25.345701       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1006 19:53:25.354128       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1006 19:53:25.354191       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1006 19:53:25.354252       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1006 19:53:25.354263       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1006 19:53:25.354269       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1006 19:53:25.371983       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-314275" podCIDRs=["10.244.0.0/24"]
	I1006 19:53:45.287942       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e33ee5b29b3f802cd6c1e9ad1894e442041c0b40c2fbdc527e72abae797a6453] <==
	I1006 19:53:27.609034       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:53:27.799468       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:53:27.900255       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:53:27.900296       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1006 19:53:27.900440       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:53:28.052905       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:53:28.053037       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:53:28.058587       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:53:28.059110       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:53:28.059316       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:53:28.060992       1 config.go:200] "Starting service config controller"
	I1006 19:53:28.061059       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:53:28.061101       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:53:28.061131       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:53:28.061176       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:53:28.061264       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:53:28.062054       1 config.go:309] "Starting node config controller"
	I1006 19:53:28.062105       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:53:28.062134       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:53:28.165588       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 19:53:28.165639       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 19:53:28.165687       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e5c5e1d59e1792dd99101e332343d35f81db7cee6cbe1dd471a891e2226b6c6b] <==
	I1006 19:53:16.954249       1 serving.go:386] Generated self-signed cert in-memory
	W1006 19:53:19.196185       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1006 19:53:19.201780       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1006 19:53:19.201870       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1006 19:53:19.201915       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1006 19:53:19.267934       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 19:53:19.285321       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:53:19.289338       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 19:53:19.294388       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:53:19.294491       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:53:19.294535       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1006 19:53:19.326805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1006 19:53:20.195246       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 19:53:26 no-preload-314275 kubelet[2007]: E1006 19:53:26.470441    2007 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:no-preload-314275\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-314275' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 06 19:53:26 no-preload-314275 kubelet[2007]: I1006 19:53:26.498313    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/14e2cef3-72bf-4202-b91a-9b248e7b93ec-kube-proxy\") pod \"kube-proxy-nr6pc\" (UID: \"14e2cef3-72bf-4202-b91a-9b248e7b93ec\") " pod="kube-system/kube-proxy-nr6pc"
	Oct 06 19:53:26 no-preload-314275 kubelet[2007]: I1006 19:53:26.498383    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bcdd65eb-12b8-4b04-be54-5dd536ce6b7a-cni-cfg\") pod \"kindnet-b6hb7\" (UID: \"bcdd65eb-12b8-4b04-be54-5dd536ce6b7a\") " pod="kube-system/kindnet-b6hb7"
	Oct 06 19:53:26 no-preload-314275 kubelet[2007]: I1006 19:53:26.498416    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcdd65eb-12b8-4b04-be54-5dd536ce6b7a-xtables-lock\") pod \"kindnet-b6hb7\" (UID: \"bcdd65eb-12b8-4b04-be54-5dd536ce6b7a\") " pod="kube-system/kindnet-b6hb7"
	Oct 06 19:53:26 no-preload-314275 kubelet[2007]: I1006 19:53:26.498435    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcdd65eb-12b8-4b04-be54-5dd536ce6b7a-lib-modules\") pod \"kindnet-b6hb7\" (UID: \"bcdd65eb-12b8-4b04-be54-5dd536ce6b7a\") " pod="kube-system/kindnet-b6hb7"
	Oct 06 19:53:26 no-preload-314275 kubelet[2007]: I1006 19:53:26.498457    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxrtv\" (UniqueName: \"kubernetes.io/projected/bcdd65eb-12b8-4b04-be54-5dd536ce6b7a-kube-api-access-gxrtv\") pod \"kindnet-b6hb7\" (UID: \"bcdd65eb-12b8-4b04-be54-5dd536ce6b7a\") " pod="kube-system/kindnet-b6hb7"
	Oct 06 19:53:26 no-preload-314275 kubelet[2007]: I1006 19:53:26.498475    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14e2cef3-72bf-4202-b91a-9b248e7b93ec-xtables-lock\") pod \"kube-proxy-nr6pc\" (UID: \"14e2cef3-72bf-4202-b91a-9b248e7b93ec\") " pod="kube-system/kube-proxy-nr6pc"
	Oct 06 19:53:26 no-preload-314275 kubelet[2007]: I1006 19:53:26.498531    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14e2cef3-72bf-4202-b91a-9b248e7b93ec-lib-modules\") pod \"kube-proxy-nr6pc\" (UID: \"14e2cef3-72bf-4202-b91a-9b248e7b93ec\") " pod="kube-system/kube-proxy-nr6pc"
	Oct 06 19:53:26 no-preload-314275 kubelet[2007]: I1006 19:53:26.498585    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnl7l\" (UniqueName: \"kubernetes.io/projected/14e2cef3-72bf-4202-b91a-9b248e7b93ec-kube-api-access-hnl7l\") pod \"kube-proxy-nr6pc\" (UID: \"14e2cef3-72bf-4202-b91a-9b248e7b93ec\") " pod="kube-system/kube-proxy-nr6pc"
	Oct 06 19:53:27 no-preload-314275 kubelet[2007]: I1006 19:53:27.338818    2007 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 06 19:53:27 no-preload-314275 kubelet[2007]: W1006 19:53:27.371781    2007 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/crio-fa609c9066260ddd5470bea767579d21ff6b76426f33533d12ad7fc832426ae5 WatchSource:0}: Error finding container fa609c9066260ddd5470bea767579d21ff6b76426f33533d12ad7fc832426ae5: Status 404 returned error can't find the container with id fa609c9066260ddd5470bea767579d21ff6b76426f33533d12ad7fc832426ae5
	Oct 06 19:53:27 no-preload-314275 kubelet[2007]: W1006 19:53:27.650791    2007 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/crio-4499aa88a90a3d0564e2633db8433da1d5368071c986a27d19868a496b7d7735 WatchSource:0}: Error finding container 4499aa88a90a3d0564e2633db8433da1d5368071c986a27d19868a496b7d7735: Status 404 returned error can't find the container with id 4499aa88a90a3d0564e2633db8433da1d5368071c986a27d19868a496b7d7735
	Oct 06 19:53:27 no-preload-314275 kubelet[2007]: I1006 19:53:27.879328    2007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nr6pc" podStartSLOduration=1.8792718800000001 podStartE2EDuration="1.87927188s" podCreationTimestamp="2025-10-06 19:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:53:27.879281037 +0000 UTC m=+6.599436503" watchObservedRunningTime="2025-10-06 19:53:27.87927188 +0000 UTC m=+6.599427263"
	Oct 06 19:53:31 no-preload-314275 kubelet[2007]: E1006 19:53:31.955397    2007 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/systemd-tmpfiles-clean.service\": RecentStats: unable to find data in memory cache]"
	Oct 06 19:53:42 no-preload-314275 kubelet[2007]: I1006 19:53:42.398655    2007 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 06 19:53:42 no-preload-314275 kubelet[2007]: I1006 19:53:42.443964    2007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-b6hb7" podStartSLOduration=12.221701881 podStartE2EDuration="16.443947092s" podCreationTimestamp="2025-10-06 19:53:26 +0000 UTC" firstStartedPulling="2025-10-06 19:53:27.679438032 +0000 UTC m=+6.399593415" lastFinishedPulling="2025-10-06 19:53:31.901683252 +0000 UTC m=+10.621838626" observedRunningTime="2025-10-06 19:53:32.904045252 +0000 UTC m=+11.624200635" watchObservedRunningTime="2025-10-06 19:53:42.443947092 +0000 UTC m=+21.164102491"
	Oct 06 19:53:42 no-preload-314275 kubelet[2007]: I1006 19:53:42.571399    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bc39edb-cfcd-483e-9107-1e53757c329d-config-volume\") pod \"coredns-66bc5c9577-tccns\" (UID: \"3bc39edb-cfcd-483e-9107-1e53757c329d\") " pod="kube-system/coredns-66bc5c9577-tccns"
	Oct 06 19:53:42 no-preload-314275 kubelet[2007]: I1006 19:53:42.571459    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q962d\" (UniqueName: \"kubernetes.io/projected/3bc39edb-cfcd-483e-9107-1e53757c329d-kube-api-access-q962d\") pod \"coredns-66bc5c9577-tccns\" (UID: \"3bc39edb-cfcd-483e-9107-1e53757c329d\") " pod="kube-system/coredns-66bc5c9577-tccns"
	Oct 06 19:53:42 no-preload-314275 kubelet[2007]: I1006 19:53:42.571491    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/00c9a83a-e9f1-4db6-8b19-734ade7dec64-tmp\") pod \"storage-provisioner\" (UID: \"00c9a83a-e9f1-4db6-8b19-734ade7dec64\") " pod="kube-system/storage-provisioner"
	Oct 06 19:53:42 no-preload-314275 kubelet[2007]: I1006 19:53:42.571584    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxrhb\" (UniqueName: \"kubernetes.io/projected/00c9a83a-e9f1-4db6-8b19-734ade7dec64-kube-api-access-mxrhb\") pod \"storage-provisioner\" (UID: \"00c9a83a-e9f1-4db6-8b19-734ade7dec64\") " pod="kube-system/storage-provisioner"
	Oct 06 19:53:42 no-preload-314275 kubelet[2007]: W1006 19:53:42.771356    2007 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/crio-d4b2a5095cef4b4f7014a631128ef44b911e1f990a79c68c777b5113d6057735 WatchSource:0}: Error finding container d4b2a5095cef4b4f7014a631128ef44b911e1f990a79c68c777b5113d6057735: Status 404 returned error can't find the container with id d4b2a5095cef4b4f7014a631128ef44b911e1f990a79c68c777b5113d6057735
	Oct 06 19:53:42 no-preload-314275 kubelet[2007]: W1006 19:53:42.808113    2007 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/crio-adf16264ed40114ebdd63f53eefd0b8e088730a739ec7d2c27fbafc2669a8906 WatchSource:0}: Error finding container adf16264ed40114ebdd63f53eefd0b8e088730a739ec7d2c27fbafc2669a8906: Status 404 returned error can't find the container with id adf16264ed40114ebdd63f53eefd0b8e088730a739ec7d2c27fbafc2669a8906
	Oct 06 19:53:42 no-preload-314275 kubelet[2007]: I1006 19:53:42.945676    2007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.945641473 podStartE2EDuration="14.945641473s" podCreationTimestamp="2025-10-06 19:53:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:53:42.94520615 +0000 UTC m=+21.665361525" watchObservedRunningTime="2025-10-06 19:53:42.945641473 +0000 UTC m=+21.665796847"
	Oct 06 19:53:43 no-preload-314275 kubelet[2007]: I1006 19:53:43.959870    2007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tccns" podStartSLOduration=17.959853346 podStartE2EDuration="17.959853346s" podCreationTimestamp="2025-10-06 19:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:53:43.923234295 +0000 UTC m=+22.643389678" watchObservedRunningTime="2025-10-06 19:53:43.959853346 +0000 UTC m=+22.680008721"
	Oct 06 19:53:46 no-preload-314275 kubelet[2007]: I1006 19:53:46.103981    2007 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcwp5\" (UniqueName: \"kubernetes.io/projected/455522a4-5398-4b39-bd9c-3c0361fb193f-kube-api-access-fcwp5\") pod \"busybox\" (UID: \"455522a4-5398-4b39-bd9c-3c0361fb193f\") " pod="default/busybox"
	
	
	==> storage-provisioner [58536f2dc03f2f0cd098186537bf7990f2a8027ef40b4d993d3239d020bca5fb] <==
	I1006 19:53:42.879090       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 19:53:42.964713       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 19:53:42.964846       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1006 19:53:42.969882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:43.010445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:53:43.010656       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 19:53:43.010843       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-314275_b6eb351f-40ed-46ef-82e9-6fc993356a87!
	I1006 19:53:43.013931       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b6bc9f45-80c8-40cf-884b-42fa87f06e10", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-314275_b6eb351f-40ed-46ef-82e9-6fc993356a87 became leader
	W1006 19:53:43.021417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:43.031101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:53:43.113257       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-314275_b6eb351f-40ed-46ef-82e9-6fc993356a87!
	W1006 19:53:45.041892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:45.053239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:47.056683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:47.066224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:49.070246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:49.080904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:51.084586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:51.089289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:53.093055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:53.102710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:55.108234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:55.122544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:57.126274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:53:57.138052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-314275 -n no-preload-314275
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-314275 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-830393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-830393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (268.787041ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:54:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
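The MK_ADDON_ENABLE_PAUSED error above shows the failing call directly: before enabling the addon, minikube checks for paused containers by running sudo runc list -f json on the node, and that command exits 1 because /run/runc does not exist there. One way to inspect the runtime state by hand (a hedged sketch, assuming the embed-certs-830393 node is still running and crio is the active runtime):

	out/minikube-linux-arm64 -p embed-certs-830393 ssh -- sudo ls -la /run/runc /run/crio
	out/minikube-linux-arm64 -p embed-certs-830393 ssh -- sudo crictl ps -a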
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-830393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-830393 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-830393 describe deploy/metrics-server -n kube-system: exit status 1 (89.73665ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-830393 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
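The assertion expects the metrics-server deployment to reference the overridden registry, i.e. an image containing fake.domain/registry.k8s.io/echoserver:1.4, but the deployment was never created because the enable step aborted on the paused-state check. When the enable step does complete, the rendered image can be read back directly (hedged, assuming the deployment exists at that point):

	kubectl --context embed-certs-830393 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'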
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-830393
helpers_test.go:243: (dbg) docker inspect embed-certs-830393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a",
	        "Created": "2025-10-06T19:53:31.962897615Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196547,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:53:32.031072214Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a-json.log",
	        "Name": "/embed-certs-830393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-830393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-830393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a",
	                "LowerDir": "/var/lib/docker/overlay2/bd523f7fa06a7e0ad89b9a8ced25801bcb8b1dd5b3445e1afcebc1a259cb4596-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd523f7fa06a7e0ad89b9a8ced25801bcb8b1dd5b3445e1afcebc1a259cb4596/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd523f7fa06a7e0ad89b9a8ced25801bcb8b1dd5b3445e1afcebc1a259cb4596/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd523f7fa06a7e0ad89b9a8ced25801bcb8b1dd5b3445e1afcebc1a259cb4596/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-830393",
	                "Source": "/var/lib/docker/volumes/embed-certs-830393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-830393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-830393",
	                "name.minikube.sigs.k8s.io": "embed-certs-830393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b21883a28947d9087633b4a5f22fa493b7ea5232a0a86083013ca1934d006865",
	            "SandboxKey": "/var/run/docker/netns/b21883a28947",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-830393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:0e:63:56:fc:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1800026322b057a83604241d8aa91bc0c8c07713c3ce5f5e76ba25af81a1e332",
	                    "EndpointID": "e8c77f7e94e1890072511940eb226d3072f984c51874e5913a0022c93a9d2cfa",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-830393",
	                        "db0504489522"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
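The inspect output above shows the node's published ports bound to the loopback interface only: SSH (22/tcp) on 127.0.0.1:33065 and the API server (8443/tcp) on 127.0.0.1:33068. A quick host-side cross-check of those mappings (hedged, assuming the container is still running):

	docker port embed-certs-830393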
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-830393 -n embed-certs-830393
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-830393 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-830393 logs -n 25: (1.361133224s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p force-systemd-flag-203169                                                                                                                                                                                                                  │ force-systemd-flag-203169 │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:47 UTC │
	│ start   │ -p cert-expiration-585086 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-585086    │ jenkins │ v1.37.0 │ 06 Oct 25 19:47 UTC │ 06 Oct 25 19:48 UTC │
	│ delete  │ -p force-systemd-env-760371                                                                                                                                                                                                                   │ force-systemd-env-760371  │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ start   │ -p cert-options-593131 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ ssh     │ cert-options-593131 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ ssh     │ -p cert-options-593131 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ delete  │ -p cert-options-593131                                                                                                                                                                                                                        │ cert-options-593131       │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-100545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │                     │
	│ stop    │ -p old-k8s-version-100545 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-100545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:51 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p cert-expiration-585086 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-585086    │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:53 UTC │
	│ image   │ old-k8s-version-100545 image list --format=json                                                                                                                                                                                               │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ pause   │ -p old-k8s-version-100545 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │                     │
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545    │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275         │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:53 UTC │
	│ delete  │ -p cert-expiration-585086                                                                                                                                                                                                                     │ cert-expiration-585086    │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:53 UTC │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393        │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-314275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-314275         │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │                     │
	│ stop    │ -p no-preload-314275 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-314275         │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:54 UTC │
	│ addons  │ enable dashboard -p no-preload-314275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-314275         │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:54 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275         │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-830393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-830393        │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:54:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:54:10.853088  199488 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:54:10.853280  199488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:54:10.853307  199488 out.go:374] Setting ErrFile to fd 2...
	I1006 19:54:10.853325  199488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:54:10.853592  199488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:54:10.853978  199488 out.go:368] Setting JSON to false
	I1006 19:54:10.854950  199488 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5786,"bootTime":1759774665,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:54:10.855039  199488 start.go:140] virtualization:  
	I1006 19:54:10.858038  199488 out.go:179] * [no-preload-314275] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:54:10.861785  199488 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:54:10.861920  199488 notify.go:220] Checking for updates...
	I1006 19:54:10.867641  199488 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:54:10.870790  199488 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:54:10.873665  199488 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:54:10.876558  199488 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:54:10.879624  199488 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:54:10.882982  199488 config.go:182] Loaded profile config "no-preload-314275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:54:10.883607  199488 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:54:10.908622  199488 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:54:10.908755  199488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:54:10.988429  199488 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:54:10.979084363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:54:10.988540  199488 docker.go:318] overlay module found
	I1006 19:54:10.991629  199488 out.go:179] * Using the docker driver based on existing profile
	I1006 19:54:10.994509  199488 start.go:304] selected driver: docker
	I1006 19:54:10.994531  199488 start.go:924] validating driver "docker" against &{Name:no-preload-314275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-314275 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:54:10.994629  199488 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:54:10.995352  199488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:54:11.056321  199488 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:54:11.046480823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:54:11.056664  199488 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:54:11.056699  199488 cni.go:84] Creating CNI manager for ""
	I1006 19:54:11.056760  199488 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:54:11.056806  199488 start.go:348] cluster config:
	{Name:no-preload-314275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-314275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:54:11.061718  199488 out.go:179] * Starting "no-preload-314275" primary control-plane node in "no-preload-314275" cluster
	I1006 19:54:11.064604  199488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:54:11.067763  199488 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:54:11.070852  199488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:54:11.070936  199488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:54:11.071006  199488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/config.json ...
	I1006 19:54:11.071409  199488 cache.go:107] acquiring lock: {Name:mk82d1f3001bf48ab72c24cf6b752688e2e50aee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:54:11.071499  199488 cache.go:115] /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1006 19:54:11.071511  199488 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.625µs
	I1006 19:54:11.071524  199488 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1006 19:54:11.071539  199488 cache.go:107] acquiring lock: {Name:mk326376d807a500663ac186cb38c6f2871100f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:54:11.071570  199488 cache.go:115] /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1006 19:54:11.071575  199488 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 37.851µs
	I1006 19:54:11.071582  199488 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1006 19:54:11.071591  199488 cache.go:107] acquiring lock: {Name:mk28768f0f00bc0bc34b1c8be86c5bd00b3e2e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:54:11.071617  199488 cache.go:115] /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1006 19:54:11.071622  199488 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.82µs
	I1006 19:54:11.071628  199488 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1006 19:54:11.071637  199488 cache.go:107] acquiring lock: {Name:mkea60ee84f0f01244ebdfb4e58003ca6563e47c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:54:11.071581  199488 cache.go:107] acquiring lock: {Name:mk7f3726b40c089082e87769ea7f2770460838a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:54:11.071664  199488 cache.go:115] /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1006 19:54:11.071670  199488 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 34.396µs
	I1006 19:54:11.071675  199488 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1006 19:54:11.071684  199488 cache.go:107] acquiring lock: {Name:mkac00106c8d32a6e98eeb827718e7996a3fba6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:54:11.071763  199488 cache.go:115] /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1006 19:54:11.071772  199488 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 200.004µs
	I1006 19:54:11.071780  199488 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1006 19:54:11.071789  199488 cache.go:115] /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1006 19:54:11.071796  199488 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 112.732µs
	I1006 19:54:11.071794  199488 cache.go:107] acquiring lock: {Name:mkb660058c43ee68336f11749264f8c386e0b997 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:54:11.071802  199488 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1006 19:54:11.071826  199488 cache.go:115] /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1006 19:54:11.071832  199488 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 40.673µs
	I1006 19:54:11.071830  199488 cache.go:107] acquiring lock: {Name:mkfa95dad82e148f7d00bce3fc75fde0f4ba354e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:54:11.071838  199488 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1006 19:54:11.071892  199488 cache.go:115] /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1006 19:54:11.071907  199488 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 78.18µs
	I1006 19:54:11.071914  199488 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21701-2540/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1006 19:54:11.071943  199488 cache.go:87] Successfully saved all images to host disk.
	I1006 19:54:11.093417  199488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:54:11.093438  199488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:54:11.093451  199488 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:54:11.093474  199488 start.go:360] acquireMachinesLock for no-preload-314275: {Name:mk5f66305ecdb561020a810564e6cdbd3247a551 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:54:11.093545  199488 start.go:364] duration metric: took 55.443µs to acquireMachinesLock for "no-preload-314275"
	I1006 19:54:11.093584  199488 start.go:96] Skipping create...Using existing machine configuration
	I1006 19:54:11.093597  199488 fix.go:54] fixHost starting: 
	I1006 19:54:11.093860  199488 cli_runner.go:164] Run: docker container inspect no-preload-314275 --format={{.State.Status}}
	I1006 19:54:11.112864  199488 fix.go:112] recreateIfNeeded on no-preload-314275: state=Stopped err=<nil>
	W1006 19:54:11.112899  199488 fix.go:138] unexpected machine state, will restart: <nil>
	W1006 19:54:09.926309  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	W1006 19:54:11.926773  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	W1006 19:54:14.426024  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	I1006 19:54:11.118092  199488 out.go:252] * Restarting existing docker container for "no-preload-314275" ...
	I1006 19:54:11.118204  199488 cli_runner.go:164] Run: docker start no-preload-314275
	I1006 19:54:11.362188  199488 cli_runner.go:164] Run: docker container inspect no-preload-314275 --format={{.State.Status}}
	I1006 19:54:11.382584  199488 kic.go:430] container "no-preload-314275" state is running.
	I1006 19:54:11.383078  199488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-314275
	I1006 19:54:11.412946  199488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/config.json ...
	I1006 19:54:11.413183  199488 machine.go:93] provisionDockerMachine start ...
	I1006 19:54:11.413245  199488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:54:11.438016  199488 main.go:141] libmachine: Using SSH client type: native
	I1006 19:54:11.438331  199488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1006 19:54:11.438340  199488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:54:11.439234  199488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1006 19:54:14.579467  199488 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-314275
	
	I1006 19:54:14.579494  199488 ubuntu.go:182] provisioning hostname "no-preload-314275"
	I1006 19:54:14.579562  199488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:54:14.597061  199488 main.go:141] libmachine: Using SSH client type: native
	I1006 19:54:14.597372  199488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1006 19:54:14.597388  199488 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-314275 && echo "no-preload-314275" | sudo tee /etc/hostname
	I1006 19:54:14.741349  199488 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-314275
	
	I1006 19:54:14.741541  199488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:54:14.759382  199488 main.go:141] libmachine: Using SSH client type: native
	I1006 19:54:14.759739  199488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1006 19:54:14.759779  199488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-314275' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-314275/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-314275' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:54:14.891987  199488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:54:14.892012  199488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:54:14.892088  199488 ubuntu.go:190] setting up certificates
	I1006 19:54:14.892099  199488 provision.go:84] configureAuth start
	I1006 19:54:14.892172  199488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-314275
	I1006 19:54:14.910085  199488 provision.go:143] copyHostCerts
	I1006 19:54:14.910164  199488 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:54:14.910187  199488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:54:14.910279  199488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:54:14.910384  199488 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:54:14.910395  199488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:54:14.910422  199488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:54:14.910487  199488 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:54:14.910496  199488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:54:14.910520  199488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:54:14.910574  199488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.no-preload-314275 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-314275]
	I1006 19:54:15.147861  199488 provision.go:177] copyRemoteCerts
	I1006 19:54:15.147928  199488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:54:15.147976  199488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:54:15.165015  199488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/no-preload-314275/id_rsa Username:docker}
	I1006 19:54:15.259525  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:54:15.278377  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 19:54:15.297263  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 19:54:15.315840  199488 provision.go:87] duration metric: took 423.726842ms to configureAuth
	I1006 19:54:15.315909  199488 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:54:15.316122  199488 config.go:182] Loaded profile config "no-preload-314275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:54:15.316227  199488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:54:15.334557  199488 main.go:141] libmachine: Using SSH client type: native
	I1006 19:54:15.334879  199488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1006 19:54:15.334899  199488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:54:15.656432  199488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:54:15.656462  199488 machine.go:96] duration metric: took 4.243269584s to provisionDockerMachine
	I1006 19:54:15.656473  199488 start.go:293] postStartSetup for "no-preload-314275" (driver="docker")
	I1006 19:54:15.656485  199488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:54:15.656549  199488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:54:15.656596  199488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:54:15.680157  199488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/no-preload-314275/id_rsa Username:docker}
	I1006 19:54:15.779651  199488 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:54:15.783254  199488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:54:15.783295  199488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:54:15.783326  199488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:54:15.783386  199488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:54:15.783476  199488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:54:15.783590  199488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:54:15.791622  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:54:15.809797  199488 start.go:296] duration metric: took 153.30707ms for postStartSetup
	I1006 19:54:15.809963  199488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:54:15.810042  199488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:54:15.827640  199488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/no-preload-314275/id_rsa Username:docker}
	I1006 19:54:15.920778  199488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:54:15.927821  199488 fix.go:56] duration metric: took 4.834224146s for fixHost
	I1006 19:54:15.927849  199488 start.go:83] releasing machines lock for "no-preload-314275", held for 4.834295474s
	I1006 19:54:15.927924  199488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-314275
	I1006 19:54:15.945476  199488 ssh_runner.go:195] Run: cat /version.json
	I1006 19:54:15.945530  199488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:54:15.945539  199488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:54:15.945591  199488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:54:15.967166  199488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/no-preload-314275/id_rsa Username:docker}
	I1006 19:54:15.973568  199488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/no-preload-314275/id_rsa Username:docker}
	I1006 19:54:16.067485  199488 ssh_runner.go:195] Run: systemctl --version
	I1006 19:54:16.158498  199488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:54:16.197756  199488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:54:16.202185  199488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:54:16.202308  199488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:54:16.210427  199488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 19:54:16.210454  199488 start.go:495] detecting cgroup driver to use...
	I1006 19:54:16.210508  199488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:54:16.210579  199488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:54:16.226440  199488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:54:16.240144  199488 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:54:16.240255  199488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:54:16.256471  199488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:54:16.269876  199488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:54:16.392204  199488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:54:16.513083  199488 docker.go:234] disabling docker service ...
	I1006 19:54:16.513198  199488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:54:16.530651  199488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:54:16.543963  199488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:54:16.662451  199488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:54:16.787152  199488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:54:16.801096  199488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:54:16.814953  199488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:54:16.815027  199488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:54:16.823853  199488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:54:16.823982  199488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:54:16.834642  199488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:54:16.843496  199488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:54:16.852391  199488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:54:16.860973  199488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:54:16.869935  199488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:54:16.878567  199488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:54:16.887761  199488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:54:16.895461  199488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:54:16.902919  199488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:54:17.026095  199488 ssh_runner.go:195] Run: sudo systemctl restart crio
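
Editor's note (not part of the test output): the sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. Assuming those edits applied cleanly, the resulting drop-in would contain roughly the values below; the section headers are an assumption based on the usual CRI-O config layout, only the key/value pairs are taken from the log.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
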
	I1006 19:54:17.166814  199488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:54:17.166898  199488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:54:17.170940  199488 start.go:563] Will wait 60s for crictl version
	I1006 19:54:17.171025  199488 ssh_runner.go:195] Run: which crictl
	I1006 19:54:17.174670  199488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:54:17.201373  199488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:54:17.201471  199488 ssh_runner.go:195] Run: crio --version
	I1006 19:54:17.233208  199488 ssh_runner.go:195] Run: crio --version
	I1006 19:54:17.264488  199488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:54:17.267411  199488 cli_runner.go:164] Run: docker network inspect no-preload-314275 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:54:17.283490  199488 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1006 19:54:17.287414  199488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:54:17.298283  199488 kubeadm.go:883] updating cluster {Name:no-preload-314275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-314275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:54:17.298396  199488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:54:17.298440  199488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:54:17.330104  199488 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:54:17.330130  199488 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:54:17.330139  199488 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1006 19:54:17.330236  199488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-314275 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-314275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:54:17.330312  199488 ssh_runner.go:195] Run: crio config
	I1006 19:54:17.398223  199488 cni.go:84] Creating CNI manager for ""
	I1006 19:54:17.398258  199488 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:54:17.398274  199488 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:54:17.398297  199488 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-314275 NodeName:no-preload-314275 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:54:17.398441  199488 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-314275"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:54:17.398526  199488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:54:17.406065  199488 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:54:17.406186  199488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:54:17.413747  199488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 19:54:17.427813  199488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:54:17.441034  199488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
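
Editor's note (not part of the test output): the kubeadm configuration dumped above is written to the node as /var/tmp/minikube/kubeadm.yaml.new (2214 bytes). A minimal Go sketch like the one below, which is hypothetical and not part of minikube, could decode that multi-document YAML and print each document's apiVersion and kind as a quick sanity check; it assumes gopkg.in/yaml.v3 is available and the file path from the log.

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Path taken from the log above; adjust when inspecting locally.
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                if err == io.EOF {
                    break
                }
                log.Fatal(err)
            }
            // Expect InitConfiguration, ClusterConfiguration,
            // KubeletConfiguration and KubeProxyConfiguration.
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }
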
	I1006 19:54:17.453978  199488 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:54:17.457912  199488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:54:17.468093  199488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:54:17.594599  199488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:54:17.610653  199488 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275 for IP: 192.168.76.2
	I1006 19:54:17.610727  199488 certs.go:195] generating shared ca certs ...
	I1006 19:54:17.610758  199488 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:54:17.610940  199488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:54:17.611013  199488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:54:17.611043  199488 certs.go:257] generating profile certs ...
	I1006 19:54:17.611153  199488 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.key
	I1006 19:54:17.611250  199488 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/apiserver.key.d578ccd0
	I1006 19:54:17.611335  199488 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/proxy-client.key
	I1006 19:54:17.611475  199488 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:54:17.611537  199488 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:54:17.611562  199488 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:54:17.611605  199488 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:54:17.611653  199488 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:54:17.611694  199488 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:54:17.611823  199488 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:54:17.612437  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:54:17.634820  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:54:17.655798  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:54:17.677179  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:54:17.702003  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 19:54:17.724187  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 19:54:17.756936  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:54:17.782957  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 19:54:17.815251  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:54:17.841240  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:54:17.863793  199488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:54:17.884638  199488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:54:17.899682  199488 ssh_runner.go:195] Run: openssl version
	I1006 19:54:17.906937  199488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:54:17.916531  199488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:54:17.920706  199488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:54:17.920776  199488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:54:17.965491  199488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:54:17.973616  199488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:54:17.982349  199488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:54:17.986251  199488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:54:17.986343  199488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:54:18.030674  199488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:54:18.039975  199488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:54:18.049225  199488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:54:18.054316  199488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:54:18.054386  199488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:54:18.096745  199488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 19:54:18.104999  199488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:54:18.108760  199488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 19:54:18.150980  199488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 19:54:18.192604  199488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 19:54:18.233256  199488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 19:54:18.296948  199488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 19:54:18.357969  199488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
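
Editor's note (not part of the test output): the `openssl x509 -checkend 86400` calls above verify that each control-plane certificate remains valid for at least the next 24 hours. A hedged Go equivalent is sketched below; it is illustrative only, and the certificate path is simply one of those checked in the log.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // before now+d, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
        // Same certificate checked by the log above.
        expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", expiring)
    }
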
	I1006 19:54:18.452169  199488 kubeadm.go:400] StartCluster: {Name:no-preload-314275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-314275 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:54:18.452307  199488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:54:18.452420  199488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:54:18.521833  199488 cri.go:89] found id: "e3421553bee92bb445fc71dc28a9e73f44a2e2fdf3f7ebc32bcf8e628b6f1a02"
	I1006 19:54:18.521903  199488 cri.go:89] found id: "730df4b08913c9671db866cfd407ba257e98404db9dacaf36ebb44dfbc61ab38"
	I1006 19:54:18.521920  199488 cri.go:89] found id: "186d0a80c923460b51dd7d0acf3cdfa8b7d291a0ad41f9d28777566a35ac63c5"
	I1006 19:54:18.521938  199488 cri.go:89] found id: "ffc6fa832b5c0989f18a443f577b66e36306a055f9f4dda54f1184a47f1c75de"
	I1006 19:54:18.521953  199488 cri.go:89] found id: ""
	I1006 19:54:18.522039  199488 ssh_runner.go:195] Run: sudo runc list -f json
	W1006 19:54:18.544062  199488 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:54:18Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:54:18.544228  199488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:54:18.561088  199488 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 19:54:18.561174  199488 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 19:54:18.561277  199488 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 19:54:18.584784  199488 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 19:54:18.585869  199488 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-314275" does not appear in /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:54:18.586568  199488 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-2540/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-314275" cluster setting kubeconfig missing "no-preload-314275" context setting]
	I1006 19:54:18.587603  199488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:54:18.589666  199488 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 19:54:18.601360  199488 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1006 19:54:18.601449  199488 kubeadm.go:601] duration metric: took 40.246527ms to restartPrimaryControlPlane
	I1006 19:54:18.601474  199488 kubeadm.go:402] duration metric: took 149.324373ms to StartCluster
	I1006 19:54:18.601517  199488 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:54:18.601619  199488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:54:18.603544  199488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:54:18.603990  199488 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:54:18.604427  199488 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:54:18.604503  199488 addons.go:69] Setting storage-provisioner=true in profile "no-preload-314275"
	I1006 19:54:18.604521  199488 addons.go:238] Setting addon storage-provisioner=true in "no-preload-314275"
	W1006 19:54:18.604528  199488 addons.go:247] addon storage-provisioner should already be in state true
	I1006 19:54:18.604549  199488 host.go:66] Checking if "no-preload-314275" exists ...
	I1006 19:54:18.605084  199488 cli_runner.go:164] Run: docker container inspect no-preload-314275 --format={{.State.Status}}
	I1006 19:54:18.605388  199488 config.go:182] Loaded profile config "no-preload-314275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:54:18.605479  199488 addons.go:69] Setting dashboard=true in profile "no-preload-314275"
	I1006 19:54:18.605528  199488 addons.go:238] Setting addon dashboard=true in "no-preload-314275"
	W1006 19:54:18.605551  199488 addons.go:247] addon dashboard should already be in state true
	I1006 19:54:18.605583  199488 host.go:66] Checking if "no-preload-314275" exists ...
	I1006 19:54:18.606080  199488 cli_runner.go:164] Run: docker container inspect no-preload-314275 --format={{.State.Status}}
	I1006 19:54:18.606495  199488 addons.go:69] Setting default-storageclass=true in profile "no-preload-314275"
	I1006 19:54:18.606520  199488 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-314275"
	I1006 19:54:18.606793  199488 cli_runner.go:164] Run: docker container inspect no-preload-314275 --format={{.State.Status}}
	I1006 19:54:18.611867  199488 out.go:179] * Verifying Kubernetes components...
	I1006 19:54:18.619917  199488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:54:18.652861  199488 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 19:54:18.656067  199488 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:54:18.656097  199488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 19:54:18.656162  199488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:54:18.667942  199488 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1006 19:54:18.675878  199488 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1006 19:54:16.926505  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	W1006 19:54:18.926628  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	I1006 19:54:18.677187  199488 addons.go:238] Setting addon default-storageclass=true in "no-preload-314275"
	W1006 19:54:18.677202  199488 addons.go:247] addon default-storageclass should already be in state true
	I1006 19:54:18.677226  199488 host.go:66] Checking if "no-preload-314275" exists ...
	I1006 19:54:18.677640  199488 cli_runner.go:164] Run: docker container inspect no-preload-314275 --format={{.State.Status}}
	I1006 19:54:18.681552  199488 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1006 19:54:18.681576  199488 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1006 19:54:18.681727  199488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:54:18.714949  199488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/no-preload-314275/id_rsa Username:docker}
	I1006 19:54:18.727839  199488 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 19:54:18.727863  199488 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 19:54:18.727921  199488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:54:18.754843  199488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/no-preload-314275/id_rsa Username:docker}
	I1006 19:54:18.765420  199488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/no-preload-314275/id_rsa Username:docker}
	I1006 19:54:19.002722  199488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:54:19.024926  199488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:54:19.121624  199488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 19:54:19.121874  199488 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1006 19:54:19.121889  199488 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1006 19:54:19.135386  199488 node_ready.go:35] waiting up to 6m0s for node "no-preload-314275" to be "Ready" ...
	I1006 19:54:19.192506  199488 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1006 19:54:19.192587  199488 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1006 19:54:19.267517  199488 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1006 19:54:19.267544  199488 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1006 19:54:19.294503  199488 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1006 19:54:19.294530  199488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1006 19:54:19.325675  199488 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1006 19:54:19.325699  199488 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1006 19:54:19.346029  199488 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1006 19:54:19.346054  199488 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1006 19:54:19.372345  199488 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1006 19:54:19.372370  199488 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1006 19:54:19.395627  199488 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1006 19:54:19.395654  199488 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1006 19:54:19.410398  199488 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 19:54:19.410424  199488 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1006 19:54:19.428065  199488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 19:54:23.002097  199488 node_ready.go:49] node "no-preload-314275" is "Ready"
	I1006 19:54:23.002126  199488 node_ready.go:38] duration metric: took 3.866702427s for node "no-preload-314275" to be "Ready" ...
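
Editor's note (not part of the test output): node_ready.go above polls the node's Ready condition until it flips to "True". A minimal client-go sketch of the same check is shown below; it is illustrative only, with the kubeconfig path and node name taken from the log, and it performs a single lookup rather than minikube's retry loop.

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as written by the test run above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21701-2540/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-314275", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // Report the Ready condition, which the log waits on.
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
            }
        }
    }
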
	I1006 19:54:23.002144  199488 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:54:23.002200  199488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:54:24.409934  199488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.384923683s)
	I1006 19:54:24.410003  199488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.288355628s)
	I1006 19:54:24.475987  199488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.047877648s)
	I1006 19:54:24.476207  199488 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.473994837s)
	I1006 19:54:24.476228  199488 api_server.go:72] duration metric: took 5.872169292s to wait for apiserver process to appear ...
	I1006 19:54:24.476235  199488 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:54:24.476263  199488 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:54:24.479380  199488 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-314275 addons enable metrics-server
	
	I1006 19:54:24.482409  199488 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1006 19:54:20.926971  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	W1006 19:54:23.426449  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	I1006 19:54:24.485322  199488 addons.go:514] duration metric: took 5.880894779s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1006 19:54:24.493887  199488 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 19:54:24.493934  199488 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 19:54:24.976516  199488 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1006 19:54:24.991761  199488 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1006 19:54:24.993552  199488 api_server.go:141] control plane version: v1.34.1
	I1006 19:54:24.993580  199488 api_server.go:131] duration metric: took 517.338979ms to wait for apiserver health ...
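
Editor's note (not part of the test output): the healthz wait above first sees a 500 while poststarthook/rbac/bootstrap-roles is still pending, then a 200 about half a second later. A minimal Go polling sketch of that check follows; it is illustrative only, the endpoint is taken from the log, and TLS verification is skipped because the sketch does not load the minikubeCA certificate.

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver cert is signed by minikubeCA, which this
            // sketch does not load, so verification is skipped.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        const url = "https://192.168.76.2:8443/healthz"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                log.Printf("healthz returned %d, retrying", resp.StatusCode)
            } else {
                log.Printf("healthz error: %v, retrying", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for healthz")
    }
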
	I1006 19:54:24.993589  199488 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:54:24.998386  199488 system_pods.go:59] 8 kube-system pods found
	I1006 19:54:24.998428  199488 system_pods.go:61] "coredns-66bc5c9577-tccns" [3bc39edb-cfcd-483e-9107-1e53757c329d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:54:24.998468  199488 system_pods.go:61] "etcd-no-preload-314275" [1bc41062-9b68-4fbf-b275-37eadcd1bbaf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:54:24.998482  199488 system_pods.go:61] "kindnet-b6hb7" [bcdd65eb-12b8-4b04-be54-5dd536ce6b7a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1006 19:54:24.998489  199488 system_pods.go:61] "kube-apiserver-no-preload-314275" [126f4e3a-6a02-45ed-aa95-99483e2a91c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:54:24.998496  199488 system_pods.go:61] "kube-controller-manager-no-preload-314275" [1507f50d-94ad-4592-877a-826f1a6f2033] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:54:24.998505  199488 system_pods.go:61] "kube-proxy-nr6pc" [14e2cef3-72bf-4202-b91a-9b248e7b93ec] Running
	I1006 19:54:24.998512  199488 system_pods.go:61] "kube-scheduler-no-preload-314275" [ed7b7492-9ded-48c2-957f-c46ce40d1e14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:54:24.998519  199488 system_pods.go:61] "storage-provisioner" [00c9a83a-e9f1-4db6-8b19-734ade7dec64] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:54:24.998543  199488 system_pods.go:74] duration metric: took 4.947132ms to wait for pod list to return data ...
	I1006 19:54:24.998566  199488 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:54:25.005563  199488 default_sa.go:45] found service account: "default"
	I1006 19:54:25.005590  199488 default_sa.go:55] duration metric: took 7.016575ms for default service account to be created ...
	I1006 19:54:25.005600  199488 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 19:54:25.008739  199488 system_pods.go:86] 8 kube-system pods found
	I1006 19:54:25.008777  199488 system_pods.go:89] "coredns-66bc5c9577-tccns" [3bc39edb-cfcd-483e-9107-1e53757c329d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:54:25.008786  199488 system_pods.go:89] "etcd-no-preload-314275" [1bc41062-9b68-4fbf-b275-37eadcd1bbaf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:54:25.008794  199488 system_pods.go:89] "kindnet-b6hb7" [bcdd65eb-12b8-4b04-be54-5dd536ce6b7a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1006 19:54:25.008834  199488 system_pods.go:89] "kube-apiserver-no-preload-314275" [126f4e3a-6a02-45ed-aa95-99483e2a91c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:54:25.008849  199488 system_pods.go:89] "kube-controller-manager-no-preload-314275" [1507f50d-94ad-4592-877a-826f1a6f2033] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:54:25.008854  199488 system_pods.go:89] "kube-proxy-nr6pc" [14e2cef3-72bf-4202-b91a-9b248e7b93ec] Running
	I1006 19:54:25.008867  199488 system_pods.go:89] "kube-scheduler-no-preload-314275" [ed7b7492-9ded-48c2-957f-c46ce40d1e14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:54:25.008873  199488 system_pods.go:89] "storage-provisioner" [00c9a83a-e9f1-4db6-8b19-734ade7dec64] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:54:25.008898  199488 system_pods.go:126] duration metric: took 3.27154ms to wait for k8s-apps to be running ...
	I1006 19:54:25.008923  199488 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 19:54:25.008990  199488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:54:25.039312  199488 system_svc.go:56] duration metric: took 30.381263ms WaitForService to wait for kubelet
	I1006 19:54:25.039344  199488 kubeadm.go:586] duration metric: took 6.435282521s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:54:25.039376  199488 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:54:25.058257  199488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:54:25.058292  199488 node_conditions.go:123] node cpu capacity is 2
	I1006 19:54:25.058306  199488 node_conditions.go:105] duration metric: took 18.923034ms to run NodePressure ...
	I1006 19:54:25.058344  199488 start.go:241] waiting for startup goroutines ...
	I1006 19:54:25.058359  199488 start.go:246] waiting for cluster config update ...
	I1006 19:54:25.058371  199488 start.go:255] writing updated cluster config ...
	I1006 19:54:25.058675  199488 ssh_runner.go:195] Run: rm -f paused
	I1006 19:54:25.063665  199488 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:54:25.072553  199488 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tccns" in "kube-system" namespace to be "Ready" or be gone ...
	W1006 19:54:25.432713  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	W1006 19:54:27.926344  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	W1006 19:54:27.079830  199488 pod_ready.go:104] pod "coredns-66bc5c9577-tccns" is not "Ready", error: <nil>
	W1006 19:54:29.580628  199488 pod_ready.go:104] pod "coredns-66bc5c9577-tccns" is not "Ready", error: <nil>
	W1006 19:54:29.926886  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	W1006 19:54:32.426646  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	W1006 19:54:34.427256  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	W1006 19:54:31.581333  199488 pod_ready.go:104] pod "coredns-66bc5c9577-tccns" is not "Ready", error: <nil>
	W1006 19:54:34.078951  199488 pod_ready.go:104] pod "coredns-66bc5c9577-tccns" is not "Ready", error: <nil>
	W1006 19:54:36.927043  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	W1006 19:54:39.425945  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	W1006 19:54:36.079953  199488 pod_ready.go:104] pod "coredns-66bc5c9577-tccns" is not "Ready", error: <nil>
	W1006 19:54:38.588258  199488 pod_ready.go:104] pod "coredns-66bc5c9577-tccns" is not "Ready", error: <nil>
	W1006 19:54:41.426746  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	W1006 19:54:43.926448  195795 node_ready.go:57] node "embed-certs-830393" has "Ready":"False" status (will retry)
	W1006 19:54:41.077776  199488 pod_ready.go:104] pod "coredns-66bc5c9577-tccns" is not "Ready", error: <nil>
	W1006 19:54:43.078093  199488 pod_ready.go:104] pod "coredns-66bc5c9577-tccns" is not "Ready", error: <nil>
	W1006 19:54:45.110288  199488 pod_ready.go:104] pod "coredns-66bc5c9577-tccns" is not "Ready", error: <nil>
	I1006 19:54:45.431086  195795 node_ready.go:49] node "embed-certs-830393" is "Ready"
	I1006 19:54:45.431121  195795 node_ready.go:38] duration metric: took 40.007988873s for node "embed-certs-830393" to be "Ready" ...
	I1006 19:54:45.431139  195795 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:54:45.431212  195795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:54:45.453413  195795 api_server.go:72] duration metric: took 41.383928543s to wait for apiserver process to appear ...
	I1006 19:54:45.453440  195795 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:54:45.453460  195795 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:54:45.467565  195795 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1006 19:54:45.471867  195795 api_server.go:141] control plane version: v1.34.1
	I1006 19:54:45.471895  195795 api_server.go:131] duration metric: took 18.447934ms to wait for apiserver health ...
	I1006 19:54:45.471905  195795 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:54:45.486241  195795 system_pods.go:59] 8 kube-system pods found
	I1006 19:54:45.486320  195795 system_pods.go:61] "coredns-66bc5c9577-8k4cq" [e6b9c4d9-313c-467e-b448-9867361a42fb] Pending
	I1006 19:54:45.486353  195795 system_pods.go:61] "etcd-embed-certs-830393" [94302cea-26c9-4ff5-bcff-ef3903324bfe] Running
	I1006 19:54:45.486391  195795 system_pods.go:61] "kindnet-g7jnc" [4f869226-920a-4722-aa82-308466e32e59] Running
	I1006 19:54:45.486484  195795 system_pods.go:61] "kube-apiserver-embed-certs-830393" [7c31daf2-0bbe-4d0d-a233-79da10ee31b6] Running
	I1006 19:54:45.486535  195795 system_pods.go:61] "kube-controller-manager-embed-certs-830393" [5cdae168-9643-4ada-ac16-35f8f337d1fe] Running
	I1006 19:54:45.486559  195795 system_pods.go:61] "kube-proxy-xl5tt" [75361417-428d-4ea5-89ad-4570024b8916] Running
	I1006 19:54:45.486610  195795 system_pods.go:61] "kube-scheduler-embed-certs-830393" [be02b2b3-c2c3-458c-8ec2-3d6e2b334427] Running
	I1006 19:54:45.486640  195795 system_pods.go:61] "storage-provisioner" [c6173113-55a1-44a2-b622-34ba6868ea4c] Pending
	I1006 19:54:45.486661  195795 system_pods.go:74] duration metric: took 14.749398ms to wait for pod list to return data ...
	I1006 19:54:45.486682  195795 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:54:45.496449  195795 default_sa.go:45] found service account: "default"
	I1006 19:54:45.496521  195795 default_sa.go:55] duration metric: took 9.818587ms for default service account to be created ...
	I1006 19:54:45.496546  195795 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 19:54:45.503759  195795 system_pods.go:86] 8 kube-system pods found
	I1006 19:54:45.503844  195795 system_pods.go:89] "coredns-66bc5c9577-8k4cq" [e6b9c4d9-313c-467e-b448-9867361a42fb] Pending
	I1006 19:54:45.503866  195795 system_pods.go:89] "etcd-embed-certs-830393" [94302cea-26c9-4ff5-bcff-ef3903324bfe] Running
	I1006 19:54:45.503884  195795 system_pods.go:89] "kindnet-g7jnc" [4f869226-920a-4722-aa82-308466e32e59] Running
	I1006 19:54:45.503917  195795 system_pods.go:89] "kube-apiserver-embed-certs-830393" [7c31daf2-0bbe-4d0d-a233-79da10ee31b6] Running
	I1006 19:54:45.503941  195795 system_pods.go:89] "kube-controller-manager-embed-certs-830393" [5cdae168-9643-4ada-ac16-35f8f337d1fe] Running
	I1006 19:54:45.503960  195795 system_pods.go:89] "kube-proxy-xl5tt" [75361417-428d-4ea5-89ad-4570024b8916] Running
	I1006 19:54:45.503977  195795 system_pods.go:89] "kube-scheduler-embed-certs-830393" [be02b2b3-c2c3-458c-8ec2-3d6e2b334427] Running
	I1006 19:54:45.504018  195795 system_pods.go:89] "storage-provisioner" [c6173113-55a1-44a2-b622-34ba6868ea4c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:54:45.504064  195795 retry.go:31] will retry after 276.053036ms: missing components: kube-dns
	I1006 19:54:45.784948  195795 system_pods.go:86] 8 kube-system pods found
	I1006 19:54:45.784987  195795 system_pods.go:89] "coredns-66bc5c9577-8k4cq" [e6b9c4d9-313c-467e-b448-9867361a42fb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:54:45.784995  195795 system_pods.go:89] "etcd-embed-certs-830393" [94302cea-26c9-4ff5-bcff-ef3903324bfe] Running
	I1006 19:54:45.789037  195795 system_pods.go:89] "kindnet-g7jnc" [4f869226-920a-4722-aa82-308466e32e59] Running
	I1006 19:54:45.789075  195795 system_pods.go:89] "kube-apiserver-embed-certs-830393" [7c31daf2-0bbe-4d0d-a233-79da10ee31b6] Running
	I1006 19:54:45.789084  195795 system_pods.go:89] "kube-controller-manager-embed-certs-830393" [5cdae168-9643-4ada-ac16-35f8f337d1fe] Running
	I1006 19:54:45.789089  195795 system_pods.go:89] "kube-proxy-xl5tt" [75361417-428d-4ea5-89ad-4570024b8916] Running
	I1006 19:54:45.789113  195795 system_pods.go:89] "kube-scheduler-embed-certs-830393" [be02b2b3-c2c3-458c-8ec2-3d6e2b334427] Running
	I1006 19:54:45.789139  195795 system_pods.go:89] "storage-provisioner" [c6173113-55a1-44a2-b622-34ba6868ea4c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:54:45.789160  195795 retry.go:31] will retry after 243.499445ms: missing components: kube-dns
	I1006 19:54:46.036742  195795 system_pods.go:86] 8 kube-system pods found
	I1006 19:54:46.036782  195795 system_pods.go:89] "coredns-66bc5c9577-8k4cq" [e6b9c4d9-313c-467e-b448-9867361a42fb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:54:46.036790  195795 system_pods.go:89] "etcd-embed-certs-830393" [94302cea-26c9-4ff5-bcff-ef3903324bfe] Running
	I1006 19:54:46.036821  195795 system_pods.go:89] "kindnet-g7jnc" [4f869226-920a-4722-aa82-308466e32e59] Running
	I1006 19:54:46.036835  195795 system_pods.go:89] "kube-apiserver-embed-certs-830393" [7c31daf2-0bbe-4d0d-a233-79da10ee31b6] Running
	I1006 19:54:46.036842  195795 system_pods.go:89] "kube-controller-manager-embed-certs-830393" [5cdae168-9643-4ada-ac16-35f8f337d1fe] Running
	I1006 19:54:46.036849  195795 system_pods.go:89] "kube-proxy-xl5tt" [75361417-428d-4ea5-89ad-4570024b8916] Running
	I1006 19:54:46.036853  195795 system_pods.go:89] "kube-scheduler-embed-certs-830393" [be02b2b3-c2c3-458c-8ec2-3d6e2b334427] Running
	I1006 19:54:46.036862  195795 system_pods.go:89] "storage-provisioner" [c6173113-55a1-44a2-b622-34ba6868ea4c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:54:46.036877  195795 retry.go:31] will retry after 303.20066ms: missing components: kube-dns
	I1006 19:54:46.343830  195795 system_pods.go:86] 8 kube-system pods found
	I1006 19:54:46.343860  195795 system_pods.go:89] "coredns-66bc5c9577-8k4cq" [e6b9c4d9-313c-467e-b448-9867361a42fb] Running
	I1006 19:54:46.343867  195795 system_pods.go:89] "etcd-embed-certs-830393" [94302cea-26c9-4ff5-bcff-ef3903324bfe] Running
	I1006 19:54:46.343871  195795 system_pods.go:89] "kindnet-g7jnc" [4f869226-920a-4722-aa82-308466e32e59] Running
	I1006 19:54:46.343875  195795 system_pods.go:89] "kube-apiserver-embed-certs-830393" [7c31daf2-0bbe-4d0d-a233-79da10ee31b6] Running
	I1006 19:54:46.343879  195795 system_pods.go:89] "kube-controller-manager-embed-certs-830393" [5cdae168-9643-4ada-ac16-35f8f337d1fe] Running
	I1006 19:54:46.343906  195795 system_pods.go:89] "kube-proxy-xl5tt" [75361417-428d-4ea5-89ad-4570024b8916] Running
	I1006 19:54:46.343917  195795 system_pods.go:89] "kube-scheduler-embed-certs-830393" [be02b2b3-c2c3-458c-8ec2-3d6e2b334427] Running
	I1006 19:54:46.343922  195795 system_pods.go:89] "storage-provisioner" [c6173113-55a1-44a2-b622-34ba6868ea4c] Running
	I1006 19:54:46.343931  195795 system_pods.go:126] duration metric: took 847.36589ms to wait for k8s-apps to be running ...
	I1006 19:54:46.343941  195795 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 19:54:46.344008  195795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:54:46.356777  195795 system_svc.go:56] duration metric: took 12.826452ms WaitForService to wait for kubelet
	I1006 19:54:46.356860  195795 kubeadm.go:586] duration metric: took 42.28737758s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:54:46.356896  195795 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:54:46.359852  195795 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:54:46.359885  195795 node_conditions.go:123] node cpu capacity is 2
	I1006 19:54:46.359899  195795 node_conditions.go:105] duration metric: took 2.971384ms to run NodePressure ...
	I1006 19:54:46.359933  195795 start.go:241] waiting for startup goroutines ...
	I1006 19:54:46.359948  195795 start.go:246] waiting for cluster config update ...
	I1006 19:54:46.359960  195795 start.go:255] writing updated cluster config ...
	I1006 19:54:46.360285  195795 ssh_runner.go:195] Run: rm -f paused
	I1006 19:54:46.364464  195795 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:54:46.369102  195795 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8k4cq" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:54:46.374345  195795 pod_ready.go:94] pod "coredns-66bc5c9577-8k4cq" is "Ready"
	I1006 19:54:46.374373  195795 pod_ready.go:86] duration metric: took 5.241365ms for pod "coredns-66bc5c9577-8k4cq" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:54:46.376980  195795 pod_ready.go:83] waiting for pod "etcd-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:54:46.381366  195795 pod_ready.go:94] pod "etcd-embed-certs-830393" is "Ready"
	I1006 19:54:46.381392  195795 pod_ready.go:86] duration metric: took 4.386974ms for pod "etcd-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:54:46.383378  195795 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:54:46.388050  195795 pod_ready.go:94] pod "kube-apiserver-embed-certs-830393" is "Ready"
	I1006 19:54:46.388077  195795 pod_ready.go:86] duration metric: took 4.673437ms for pod "kube-apiserver-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:54:46.390695  195795 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:54:46.769312  195795 pod_ready.go:94] pod "kube-controller-manager-embed-certs-830393" is "Ready"
	I1006 19:54:46.769339  195795 pod_ready.go:86] duration metric: took 378.610542ms for pod "kube-controller-manager-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:54:46.968335  195795 pod_ready.go:83] waiting for pod "kube-proxy-xl5tt" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:54:47.368300  195795 pod_ready.go:94] pod "kube-proxy-xl5tt" is "Ready"
	I1006 19:54:47.368328  195795 pod_ready.go:86] duration metric: took 399.966641ms for pod "kube-proxy-xl5tt" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:54:47.580620  195795 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:54:47.969438  195795 pod_ready.go:94] pod "kube-scheduler-embed-certs-830393" is "Ready"
	I1006 19:54:47.969468  195795 pod_ready.go:86] duration metric: took 388.820901ms for pod "kube-scheduler-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:54:47.969484  195795 pod_ready.go:40] duration metric: took 1.604981443s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:54:48.031499  195795 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 19:54:48.035246  195795 out.go:179] * Done! kubectl is now configured to use "embed-certs-830393" cluster and "default" namespace by default
	W1006 19:54:47.579807  199488 pod_ready.go:104] pod "coredns-66bc5c9577-tccns" is not "Ready", error: <nil>
	W1006 19:54:50.078391  199488 pod_ready.go:104] pod "coredns-66bc5c9577-tccns" is not "Ready", error: <nil>
	W1006 19:54:52.080100  199488 pod_ready.go:104] pod "coredns-66bc5c9577-tccns" is not "Ready", error: <nil>
	W1006 19:54:54.580882  199488 pod_ready.go:104] pod "coredns-66bc5c9577-tccns" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 06 19:54:45 embed-certs-830393 crio[840]: time="2025-10-06T19:54:45.888079432Z" level=info msg="Created container 4806679465c561aacec6375608afbe8fce789e275a694a86671de46fc80fb890: kube-system/coredns-66bc5c9577-8k4cq/coredns" id=343d7338-3248-43fa-9db6-78d622c0f9b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:54:45 embed-certs-830393 crio[840]: time="2025-10-06T19:54:45.889349149Z" level=info msg="Starting container: 4806679465c561aacec6375608afbe8fce789e275a694a86671de46fc80fb890" id=66a988ad-25cb-4560-8079-272f2ff47b0d name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:54:45 embed-certs-830393 crio[840]: time="2025-10-06T19:54:45.894361267Z" level=info msg="Started container" PID=1733 containerID=4806679465c561aacec6375608afbe8fce789e275a694a86671de46fc80fb890 description=kube-system/coredns-66bc5c9577-8k4cq/coredns id=66a988ad-25cb-4560-8079-272f2ff47b0d name=/runtime.v1.RuntimeService/StartContainer sandboxID=3fec1ff0830ed3eccbf9455228b1e880a8f4a55ac0c39bb3ca50241a098f3625
	Oct 06 19:54:48 embed-certs-830393 crio[840]: time="2025-10-06T19:54:48.558559966Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2819f25d-7086-4301-b7b6-32c7656e888d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:54:48 embed-certs-830393 crio[840]: time="2025-10-06T19:54:48.558632279Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:54:48 embed-certs-830393 crio[840]: time="2025-10-06T19:54:48.569314643Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:221fd8a51f6cf36b7e8929c5d7ac3279ad9cf2cbcf29b32926efc899cb7654b6 UID:ce5b9cbf-2167-4e11-9e30-7b122bb80999 NetNS:/var/run/netns/6baf4a87-7f52-4227-b1a2-73f67eea39a4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cb78}] Aliases:map[]}"
	Oct 06 19:54:48 embed-certs-830393 crio[840]: time="2025-10-06T19:54:48.569353126Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 06 19:54:48 embed-certs-830393 crio[840]: time="2025-10-06T19:54:48.589354828Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:221fd8a51f6cf36b7e8929c5d7ac3279ad9cf2cbcf29b32926efc899cb7654b6 UID:ce5b9cbf-2167-4e11-9e30-7b122bb80999 NetNS:/var/run/netns/6baf4a87-7f52-4227-b1a2-73f67eea39a4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cb78}] Aliases:map[]}"
	Oct 06 19:54:48 embed-certs-830393 crio[840]: time="2025-10-06T19:54:48.58951296Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 06 19:54:48 embed-certs-830393 crio[840]: time="2025-10-06T19:54:48.594377449Z" level=info msg="Ran pod sandbox 221fd8a51f6cf36b7e8929c5d7ac3279ad9cf2cbcf29b32926efc899cb7654b6 with infra container: default/busybox/POD" id=2819f25d-7086-4301-b7b6-32c7656e888d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:54:48 embed-certs-830393 crio[840]: time="2025-10-06T19:54:48.595557925Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8e8e30ba-ba80-4313-ae2f-4d68b68b5646 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:54:48 embed-certs-830393 crio[840]: time="2025-10-06T19:54:48.595731737Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8e8e30ba-ba80-4313-ae2f-4d68b68b5646 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:54:48 embed-certs-830393 crio[840]: time="2025-10-06T19:54:48.595773617Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=8e8e30ba-ba80-4313-ae2f-4d68b68b5646 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:54:48 embed-certs-830393 crio[840]: time="2025-10-06T19:54:48.59734953Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6bce2a12-dcf0-4e70-ac1f-780bcf1fbd56 name=/runtime.v1.ImageService/PullImage
	Oct 06 19:54:48 embed-certs-830393 crio[840]: time="2025-10-06T19:54:48.598908138Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 06 19:54:50 embed-certs-830393 crio[840]: time="2025-10-06T19:54:50.587448226Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=6bce2a12-dcf0-4e70-ac1f-780bcf1fbd56 name=/runtime.v1.ImageService/PullImage
	Oct 06 19:54:50 embed-certs-830393 crio[840]: time="2025-10-06T19:54:50.588536287Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ee1c60c3-ca63-4914-8f70-cec789df64f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:54:50 embed-certs-830393 crio[840]: time="2025-10-06T19:54:50.592734507Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4b4894b9-82a2-4c57-aa4c-77714ae647f4 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:54:50 embed-certs-830393 crio[840]: time="2025-10-06T19:54:50.599950324Z" level=info msg="Creating container: default/busybox/busybox" id=9e9773fc-34c1-48db-ab8d-a3f836874fcd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:54:50 embed-certs-830393 crio[840]: time="2025-10-06T19:54:50.600714826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:54:50 embed-certs-830393 crio[840]: time="2025-10-06T19:54:50.605433318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:54:50 embed-certs-830393 crio[840]: time="2025-10-06T19:54:50.605873252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:54:50 embed-certs-830393 crio[840]: time="2025-10-06T19:54:50.619327448Z" level=info msg="Created container 8403786304a094fa437d97cf596d32da7833ff15ca929eb7d0c669ed92371d7b: default/busybox/busybox" id=9e9773fc-34c1-48db-ab8d-a3f836874fcd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:54:50 embed-certs-830393 crio[840]: time="2025-10-06T19:54:50.622341574Z" level=info msg="Starting container: 8403786304a094fa437d97cf596d32da7833ff15ca929eb7d0c669ed92371d7b" id=023b8c88-79a8-4848-abf8-4c5c5610fd0e name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:54:50 embed-certs-830393 crio[840]: time="2025-10-06T19:54:50.625875799Z" level=info msg="Started container" PID=1787 containerID=8403786304a094fa437d97cf596d32da7833ff15ca929eb7d0c669ed92371d7b description=default/busybox/busybox id=023b8c88-79a8-4848-abf8-4c5c5610fd0e name=/runtime.v1.RuntimeService/StartContainer sandboxID=221fd8a51f6cf36b7e8929c5d7ac3279ad9cf2cbcf29b32926efc899cb7654b6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	8403786304a09       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   221fd8a51f6cf       busybox                                      default
	4806679465c56       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   3fec1ff0830ed       coredns-66bc5c9577-8k4cq                     kube-system
	421f272ee62d6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   077e8656c2138       storage-provisioner                          kube-system
	6495809c00259       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   783739f78b506       kindnet-g7jnc                                kube-system
	cf5b241d952df       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   516df751fbefc       kube-proxy-xl5tt                             kube-system
	818024cc83c33       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   41ef91a6feed2       etcd-embed-certs-830393                      kube-system
	b666de65dc849       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   debf3d263abde       kube-scheduler-embed-certs-830393            kube-system
	0b28b8e59a959       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   abbf1a1772e1f       kube-apiserver-embed-certs-830393            kube-system
	26c8ca0eff383       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   b8cc9ddce110b       kube-controller-manager-embed-certs-830393   kube-system
	
	
	==> coredns [4806679465c561aacec6375608afbe8fce789e275a694a86671de46fc80fb890] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45494 - 18945 "HINFO IN 5267498162128051648.9016797331383767357. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.073343287s
	
	
	==> describe nodes <==
	Name:               embed-certs-830393
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-830393
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=embed-certs-830393
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_53_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:53:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-830393
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:54:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:54:45 +0000   Mon, 06 Oct 2025 19:53:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:54:45 +0000   Mon, 06 Oct 2025 19:53:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:54:45 +0000   Mon, 06 Oct 2025 19:53:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:54:45 +0000   Mon, 06 Oct 2025 19:54:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-830393
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d3ffb0a025147b08e7aa6bae4783f13
	  System UUID:                f887c677-54f6-492d-93f4-e65ae4538988
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-8k4cq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-830393                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-g7jnc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-embed-certs-830393             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-830393    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-xl5tt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-embed-certs-830393             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node embed-certs-830393 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node embed-certs-830393 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)  kubelet          Node embed-certs-830393 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node embed-certs-830393 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node embed-certs-830393 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node embed-certs-830393 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node embed-certs-830393 event: Registered Node embed-certs-830393 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-830393 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 6 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:23] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:52] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:54] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [818024cc83c33895c4801f5342bdc213eb5932bd84fb22660dde84c48ade6829] <==
	{"level":"warn","ts":"2025-10-06T19:53:53.487405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.507347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.539015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.548464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.565707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.595887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.620946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.642107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.653628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.679415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.726150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.771984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.816393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.829586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.867512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.897552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.934698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.955930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:53.980940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:54.014128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:54.048277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:54.076230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:54.112433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:54.123280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:53:54.286540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44656","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:54:59 up  1:37,  0 user,  load average: 2.56, 2.20, 1.85
	Linux embed-certs-830393 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6495809c00259d4604df97e16061b46fec9968c2da7ba28f5c0513748ed40295] <==
	I1006 19:54:04.846893       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:54:04.851927       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1006 19:54:04.852084       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:54:04.852096       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:54:04.852112       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:54:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:54:05.055197       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:54:05.055244       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:54:05.055267       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:54:05.056352       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1006 19:54:35.055952       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1006 19:54:35.056081       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1006 19:54:35.056183       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1006 19:54:35.056270       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1006 19:54:36.456386       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:54:36.456494       1 metrics.go:72] Registering metrics
	I1006 19:54:36.456597       1 controller.go:711] "Syncing nftables rules"
	I1006 19:54:45.095734       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1006 19:54:45.095877       1 main.go:301] handling current node
	I1006 19:54:55.055242       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1006 19:54:55.055287       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0b28b8e59a959945df0f3e11a0ee1db1ffab6f40dc7faf89536f63e08661cbf0] <==
	E1006 19:53:56.157372       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1006 19:53:56.173701       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1006 19:53:56.176306       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1006 19:53:56.187235       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1006 19:53:56.203588       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:53:56.229038       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1006 19:53:56.401096       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:53:56.485311       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1006 19:53:56.518142       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1006 19:53:56.518255       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:53:57.610744       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:53:57.680209       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:53:57.796794       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1006 19:53:57.871281       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1006 19:53:57.876541       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 19:53:57.887508       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 19:53:58.126931       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 19:53:58.659026       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 19:53:58.758667       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1006 19:53:58.821297       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1006 19:54:03.929865       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 19:54:04.152986       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:54:04.164380       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:54:04.203574       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1006 19:54:57.389313       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:49010: use of closed network connection
	
	
	==> kube-controller-manager [26c8ca0eff3835b4be1c83d519cbba239936c87b345c3c36c95f0c8eee7f06a7] <==
	I1006 19:54:03.127299       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1006 19:54:03.127669       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 19:54:03.128641       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1006 19:54:03.133175       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1006 19:54:03.143447       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 19:54:03.155627       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1006 19:54:03.155638       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1006 19:54:03.165220       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:54:03.173745       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:54:03.173777       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:54:03.173785       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 19:54:03.174028       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1006 19:54:03.175278       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1006 19:54:03.175401       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1006 19:54:03.175592       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1006 19:54:03.176722       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1006 19:54:03.176915       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1006 19:54:03.176976       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1006 19:54:03.177036       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-830393"
	I1006 19:54:03.177075       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1006 19:54:03.180563       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1006 19:54:03.185182       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 19:54:03.185335       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1006 19:54:03.185553       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1006 19:54:48.183735       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cf5b241d952df60051321efa45141c31a87b92d080fb6ec3a068b675fae2e1c0] <==
	I1006 19:54:04.936006       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:54:05.088613       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:54:05.189144       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:54:05.189185       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1006 19:54:05.189254       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:54:05.219049       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:54:05.219167       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:54:05.224018       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:54:05.224573       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:54:05.224628       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:54:05.225853       1 config.go:200] "Starting service config controller"
	I1006 19:54:05.225914       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:54:05.225963       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:54:05.225991       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:54:05.226026       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:54:05.226050       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:54:05.226766       1 config.go:309] "Starting node config controller"
	I1006 19:54:05.229648       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:54:05.229739       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:54:05.326387       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 19:54:05.326422       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 19:54:05.326461       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b666de65dc8493b10e69be5eece5a37e70b330677e5c9517df6026f23c438800] <==
	I1006 19:53:56.270417       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 19:53:56.273127       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1006 19:53:56.286970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1006 19:53:56.329078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1006 19:53:56.329840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1006 19:53:56.331915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1006 19:53:56.332536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1006 19:53:56.332683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1006 19:53:56.334120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1006 19:53:56.336237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1006 19:53:56.336438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1006 19:53:56.336525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1006 19:53:56.336620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1006 19:53:56.339416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1006 19:53:56.340556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1006 19:53:56.341254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1006 19:53:56.341462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1006 19:53:56.341502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1006 19:53:56.341608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1006 19:53:56.341643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1006 19:53:56.343234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1006 19:53:57.140138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1006 19:53:57.234449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1006 19:53:57.327666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1006 19:54:00.068866       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 19:54:03 embed-certs-830393 kubelet[1307]: I1006 19:54:03.122069    1307 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 06 19:54:03 embed-certs-830393 kubelet[1307]: I1006 19:54:03.122838    1307 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 06 19:54:04 embed-certs-830393 kubelet[1307]: I1006 19:54:04.372442    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/75361417-428d-4ea5-89ad-4570024b8916-kube-proxy\") pod \"kube-proxy-xl5tt\" (UID: \"75361417-428d-4ea5-89ad-4570024b8916\") " pod="kube-system/kube-proxy-xl5tt"
	Oct 06 19:54:04 embed-certs-830393 kubelet[1307]: I1006 19:54:04.372490    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f869226-920a-4722-aa82-308466e32e59-xtables-lock\") pod \"kindnet-g7jnc\" (UID: \"4f869226-920a-4722-aa82-308466e32e59\") " pod="kube-system/kindnet-g7jnc"
	Oct 06 19:54:04 embed-certs-830393 kubelet[1307]: I1006 19:54:04.372509    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4f869226-920a-4722-aa82-308466e32e59-cni-cfg\") pod \"kindnet-g7jnc\" (UID: \"4f869226-920a-4722-aa82-308466e32e59\") " pod="kube-system/kindnet-g7jnc"
	Oct 06 19:54:04 embed-certs-830393 kubelet[1307]: I1006 19:54:04.372528    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75361417-428d-4ea5-89ad-4570024b8916-lib-modules\") pod \"kube-proxy-xl5tt\" (UID: \"75361417-428d-4ea5-89ad-4570024b8916\") " pod="kube-system/kube-proxy-xl5tt"
	Oct 06 19:54:04 embed-certs-830393 kubelet[1307]: I1006 19:54:04.372545    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f869226-920a-4722-aa82-308466e32e59-lib-modules\") pod \"kindnet-g7jnc\" (UID: \"4f869226-920a-4722-aa82-308466e32e59\") " pod="kube-system/kindnet-g7jnc"
	Oct 06 19:54:04 embed-certs-830393 kubelet[1307]: I1006 19:54:04.372564    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75361417-428d-4ea5-89ad-4570024b8916-xtables-lock\") pod \"kube-proxy-xl5tt\" (UID: \"75361417-428d-4ea5-89ad-4570024b8916\") " pod="kube-system/kube-proxy-xl5tt"
	Oct 06 19:54:04 embed-certs-830393 kubelet[1307]: I1006 19:54:04.372581    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqdhq\" (UniqueName: \"kubernetes.io/projected/75361417-428d-4ea5-89ad-4570024b8916-kube-api-access-tqdhq\") pod \"kube-proxy-xl5tt\" (UID: \"75361417-428d-4ea5-89ad-4570024b8916\") " pod="kube-system/kube-proxy-xl5tt"
	Oct 06 19:54:04 embed-certs-830393 kubelet[1307]: I1006 19:54:04.372602    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d9vd\" (UniqueName: \"kubernetes.io/projected/4f869226-920a-4722-aa82-308466e32e59-kube-api-access-6d9vd\") pod \"kindnet-g7jnc\" (UID: \"4f869226-920a-4722-aa82-308466e32e59\") " pod="kube-system/kindnet-g7jnc"
	Oct 06 19:54:04 embed-certs-830393 kubelet[1307]: I1006 19:54:04.577044    1307 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 06 19:54:04 embed-certs-830393 kubelet[1307]: W1006 19:54:04.653513    1307 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/crio-783739f78b50635a4c47fb707d4cf43c490705f1d8ada3dae81c684f496a319b WatchSource:0}: Error finding container 783739f78b50635a4c47fb707d4cf43c490705f1d8ada3dae81c684f496a319b: Status 404 returned error can't find the container with id 783739f78b50635a4c47fb707d4cf43c490705f1d8ada3dae81c684f496a319b
	Oct 06 19:54:05 embed-certs-830393 kubelet[1307]: I1006 19:54:05.076450    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xl5tt" podStartSLOduration=1.076418989 podStartE2EDuration="1.076418989s" podCreationTimestamp="2025-10-06 19:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:54:05.076100165 +0000 UTC m=+6.523738370" watchObservedRunningTime="2025-10-06 19:54:05.076418989 +0000 UTC m=+6.524057194"
	Oct 06 19:54:05 embed-certs-830393 kubelet[1307]: I1006 19:54:05.104149    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-g7jnc" podStartSLOduration=1.104129195 podStartE2EDuration="1.104129195s" podCreationTimestamp="2025-10-06 19:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:54:05.103959731 +0000 UTC m=+6.551597928" watchObservedRunningTime="2025-10-06 19:54:05.104129195 +0000 UTC m=+6.551767391"
	Oct 06 19:54:45 embed-certs-830393 kubelet[1307]: I1006 19:54:45.411007    1307 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 06 19:54:45 embed-certs-830393 kubelet[1307]: I1006 19:54:45.599970    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c6173113-55a1-44a2-b622-34ba6868ea4c-tmp\") pod \"storage-provisioner\" (UID: \"c6173113-55a1-44a2-b622-34ba6868ea4c\") " pod="kube-system/storage-provisioner"
	Oct 06 19:54:45 embed-certs-830393 kubelet[1307]: I1006 19:54:45.600035    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ltcb\" (UniqueName: \"kubernetes.io/projected/c6173113-55a1-44a2-b622-34ba6868ea4c-kube-api-access-7ltcb\") pod \"storage-provisioner\" (UID: \"c6173113-55a1-44a2-b622-34ba6868ea4c\") " pod="kube-system/storage-provisioner"
	Oct 06 19:54:45 embed-certs-830393 kubelet[1307]: I1006 19:54:45.600063    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6b9c4d9-313c-467e-b448-9867361a42fb-config-volume\") pod \"coredns-66bc5c9577-8k4cq\" (UID: \"e6b9c4d9-313c-467e-b448-9867361a42fb\") " pod="kube-system/coredns-66bc5c9577-8k4cq"
	Oct 06 19:54:45 embed-certs-830393 kubelet[1307]: I1006 19:54:45.600083    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46m7w\" (UniqueName: \"kubernetes.io/projected/e6b9c4d9-313c-467e-b448-9867361a42fb-kube-api-access-46m7w\") pod \"coredns-66bc5c9577-8k4cq\" (UID: \"e6b9c4d9-313c-467e-b448-9867361a42fb\") " pod="kube-system/coredns-66bc5c9577-8k4cq"
	Oct 06 19:54:45 embed-certs-830393 kubelet[1307]: W1006 19:54:45.777044    1307 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/crio-077e8656c2138d0aa482d87fecae9c2d4bab87f8b8557067d9a8de8cf69a81d8 WatchSource:0}: Error finding container 077e8656c2138d0aa482d87fecae9c2d4bab87f8b8557067d9a8de8cf69a81d8: Status 404 returned error can't find the container with id 077e8656c2138d0aa482d87fecae9c2d4bab87f8b8557067d9a8de8cf69a81d8
	Oct 06 19:54:46 embed-certs-830393 kubelet[1307]: I1006 19:54:46.205925    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8k4cq" podStartSLOduration=42.205903785 podStartE2EDuration="42.205903785s" podCreationTimestamp="2025-10-06 19:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:54:46.18488449 +0000 UTC m=+47.632522703" watchObservedRunningTime="2025-10-06 19:54:46.205903785 +0000 UTC m=+47.653541982"
	Oct 06 19:54:46 embed-certs-830393 kubelet[1307]: I1006 19:54:46.226568    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.226544997 podStartE2EDuration="41.226544997s" podCreationTimestamp="2025-10-06 19:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:54:46.206524834 +0000 UTC m=+47.654163039" watchObservedRunningTime="2025-10-06 19:54:46.226544997 +0000 UTC m=+47.674183194"
	Oct 06 19:54:48 embed-certs-830393 kubelet[1307]: I1006 19:54:48.322606    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dgbn\" (UniqueName: \"kubernetes.io/projected/ce5b9cbf-2167-4e11-9e30-7b122bb80999-kube-api-access-2dgbn\") pod \"busybox\" (UID: \"ce5b9cbf-2167-4e11-9e30-7b122bb80999\") " pod="default/busybox"
	Oct 06 19:54:48 embed-certs-830393 kubelet[1307]: W1006 19:54:48.593437    1307 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/crio-221fd8a51f6cf36b7e8929c5d7ac3279ad9cf2cbcf29b32926efc899cb7654b6 WatchSource:0}: Error finding container 221fd8a51f6cf36b7e8929c5d7ac3279ad9cf2cbcf29b32926efc899cb7654b6: Status 404 returned error can't find the container with id 221fd8a51f6cf36b7e8929c5d7ac3279ad9cf2cbcf29b32926efc899cb7654b6
	Oct 06 19:54:51 embed-certs-830393 kubelet[1307]: I1006 19:54:51.199555    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.2063685259999999 podStartE2EDuration="3.19953501s" podCreationTimestamp="2025-10-06 19:54:48 +0000 UTC" firstStartedPulling="2025-10-06 19:54:48.596711012 +0000 UTC m=+50.044349209" lastFinishedPulling="2025-10-06 19:54:50.589877488 +0000 UTC m=+52.037515693" observedRunningTime="2025-10-06 19:54:51.199140681 +0000 UTC m=+52.646779230" watchObservedRunningTime="2025-10-06 19:54:51.19953501 +0000 UTC m=+52.647173206"
	
	
	==> storage-provisioner [421f272ee62d65274e40de13fd4ef8af6401f964a45ce842c9a6cb4b0774e3ad] <==
	I1006 19:54:45.854928       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 19:54:45.867880       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 19:54:45.867938       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1006 19:54:45.872602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:45.880922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:54:45.881089       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 19:54:45.881447       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a38e7b44-d976-40ca-8b2f-247b56a0f9cb", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-830393_daba53b5-7d06-48b5-8eeb-7552324833ea became leader
	I1006 19:54:45.881556       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-830393_daba53b5-7d06-48b5-8eeb-7552324833ea!
	W1006 19:54:45.905937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:45.937509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:54:45.982226       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-830393_daba53b5-7d06-48b5-8eeb-7552324833ea!
	W1006 19:54:47.940923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:47.946399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:49.950184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:49.955398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:51.959321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:51.964271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:53.967842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:53.973247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:55.977255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:55.984406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:57.988109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:57.993373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-830393 -n embed-certs-830393
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-830393 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.81s)
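
The storage-provisioner log above prints the "v1 Endpoints is deprecated in v1.33+" warning roughly every two seconds, which lines up with renewals of the leader-election lock it acquires as kube-system/k8s.io-minikube-hostpath and records in a v1 Endpoints object. As a hedged illustration only (not the provisioner's actual code), here is a minimal client-go sketch of the Lease-based lock the warning points towards; the lock name, identity, and timings are assumptions:

	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Record the election in a coordination.k8s.io/v1 Lease rather than a
		// v1 Endpoints object, which is what the deprecation warning suggests.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath", // assumed: same name as the lock in the log
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// start provisioning work once the lease is held
				},
				OnStoppedLeading: func() {
					// stop work when leadership is lost
				},
			},
		})
	}

With a Lease lock the renewals go to coordination.k8s.io/v1, so the per-renewal Endpoints warnings should go away while the election behaviour stays the same.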

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-314275 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-314275 --alsologtostderr -v=1: exit status 80 (2.612130857s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-314275 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:55:12.721283  202623 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:55:12.721402  202623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:55:12.721407  202623 out.go:374] Setting ErrFile to fd 2...
	I1006 19:55:12.721411  202623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:55:12.721666  202623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:55:12.722039  202623 out.go:368] Setting JSON to false
	I1006 19:55:12.722057  202623 mustload.go:65] Loading cluster: no-preload-314275
	I1006 19:55:12.722520  202623 config.go:182] Loaded profile config "no-preload-314275": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:55:12.723086  202623 cli_runner.go:164] Run: docker container inspect no-preload-314275 --format={{.State.Status}}
	I1006 19:55:12.752334  202623 host.go:66] Checking if "no-preload-314275" exists ...
	I1006 19:55:12.752640  202623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:55:12.901925  202623 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:55:12.881148555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:55:12.902790  202623 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-314275 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1006 19:55:12.906644  202623 out.go:179] * Pausing node no-preload-314275 ... 
	I1006 19:55:12.908020  202623 host.go:66] Checking if "no-preload-314275" exists ...
	I1006 19:55:12.908359  202623 ssh_runner.go:195] Run: systemctl --version
	I1006 19:55:12.908411  202623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-314275
	I1006 19:55:12.930912  202623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/no-preload-314275/id_rsa Username:docker}
	I1006 19:55:13.031979  202623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:55:13.073405  202623 pause.go:51] kubelet running: true
	I1006 19:55:13.073467  202623 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:55:13.400216  202623 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:55:13.400339  202623 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:55:13.498728  202623 cri.go:89] found id: "47a18794a163924d8e7f1733f13eb816ab22d7b97358bc2c00b3efea43cb69cc"
	I1006 19:55:13.498750  202623 cri.go:89] found id: "d345bd4bc027cf1668362282ac58b7897b3db88e8a7ebbc3d85ef30fb4ffcf74"
	I1006 19:55:13.498754  202623 cri.go:89] found id: "51aa04147551728423412f83b162475268c3ea26736624b38098bb9292184c3a"
	I1006 19:55:13.498759  202623 cri.go:89] found id: "45f69ddd3c69b01b9fe25ae44f01f57b7f556ed19c8d7d38d509adc8fa4ce0a4"
	I1006 19:55:13.498762  202623 cri.go:89] found id: "27525ce70f1fdc3814390274d664d5a08475d03bc6ea2fe6c3c6391cd6fe9260"
	I1006 19:55:13.498765  202623 cri.go:89] found id: "e3421553bee92bb445fc71dc28a9e73f44a2e2fdf3f7ebc32bcf8e628b6f1a02"
	I1006 19:55:13.498768  202623 cri.go:89] found id: "730df4b08913c9671db866cfd407ba257e98404db9dacaf36ebb44dfbc61ab38"
	I1006 19:55:13.498771  202623 cri.go:89] found id: "186d0a80c923460b51dd7d0acf3cdfa8b7d291a0ad41f9d28777566a35ac63c5"
	I1006 19:55:13.498774  202623 cri.go:89] found id: "ffc6fa832b5c0989f18a443f577b66e36306a055f9f4dda54f1184a47f1c75de"
	I1006 19:55:13.498781  202623 cri.go:89] found id: "231abe139bb7ae75ba79b083238e5ef05c24806414c17ad3f26f0fc8b871d4dc"
	I1006 19:55:13.498784  202623 cri.go:89] found id: "c02a55032bbff6f8848c6ea1b9ad39ce634e876f7d260d85021fe5ee875a7b0f"
	I1006 19:55:13.498787  202623 cri.go:89] found id: ""
	I1006 19:55:13.498844  202623 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:55:13.515185  202623 retry.go:31] will retry after 171.355189ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:55:13Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:55:13.687609  202623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:55:13.706032  202623 pause.go:51] kubelet running: false
	I1006 19:55:13.706098  202623 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:55:13.960489  202623 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:55:13.960568  202623 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:55:14.041589  202623 cri.go:89] found id: "47a18794a163924d8e7f1733f13eb816ab22d7b97358bc2c00b3efea43cb69cc"
	I1006 19:55:14.041613  202623 cri.go:89] found id: "d345bd4bc027cf1668362282ac58b7897b3db88e8a7ebbc3d85ef30fb4ffcf74"
	I1006 19:55:14.041618  202623 cri.go:89] found id: "51aa04147551728423412f83b162475268c3ea26736624b38098bb9292184c3a"
	I1006 19:55:14.041622  202623 cri.go:89] found id: "45f69ddd3c69b01b9fe25ae44f01f57b7f556ed19c8d7d38d509adc8fa4ce0a4"
	I1006 19:55:14.041625  202623 cri.go:89] found id: "27525ce70f1fdc3814390274d664d5a08475d03bc6ea2fe6c3c6391cd6fe9260"
	I1006 19:55:14.041629  202623 cri.go:89] found id: "e3421553bee92bb445fc71dc28a9e73f44a2e2fdf3f7ebc32bcf8e628b6f1a02"
	I1006 19:55:14.041632  202623 cri.go:89] found id: "730df4b08913c9671db866cfd407ba257e98404db9dacaf36ebb44dfbc61ab38"
	I1006 19:55:14.041635  202623 cri.go:89] found id: "186d0a80c923460b51dd7d0acf3cdfa8b7d291a0ad41f9d28777566a35ac63c5"
	I1006 19:55:14.041644  202623 cri.go:89] found id: "ffc6fa832b5c0989f18a443f577b66e36306a055f9f4dda54f1184a47f1c75de"
	I1006 19:55:14.041651  202623 cri.go:89] found id: "231abe139bb7ae75ba79b083238e5ef05c24806414c17ad3f26f0fc8b871d4dc"
	I1006 19:55:14.041655  202623 cri.go:89] found id: "c02a55032bbff6f8848c6ea1b9ad39ce634e876f7d260d85021fe5ee875a7b0f"
	I1006 19:55:14.041658  202623 cri.go:89] found id: ""
	I1006 19:55:14.041719  202623 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:55:14.053274  202623 retry.go:31] will retry after 327.799825ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:55:14Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:55:14.381889  202623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:55:14.394948  202623 pause.go:51] kubelet running: false
	I1006 19:55:14.395041  202623 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:55:14.555404  202623 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:55:14.555486  202623 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:55:14.627465  202623 cri.go:89] found id: "47a18794a163924d8e7f1733f13eb816ab22d7b97358bc2c00b3efea43cb69cc"
	I1006 19:55:14.627484  202623 cri.go:89] found id: "d345bd4bc027cf1668362282ac58b7897b3db88e8a7ebbc3d85ef30fb4ffcf74"
	I1006 19:55:14.627498  202623 cri.go:89] found id: "51aa04147551728423412f83b162475268c3ea26736624b38098bb9292184c3a"
	I1006 19:55:14.627501  202623 cri.go:89] found id: "45f69ddd3c69b01b9fe25ae44f01f57b7f556ed19c8d7d38d509adc8fa4ce0a4"
	I1006 19:55:14.627505  202623 cri.go:89] found id: "27525ce70f1fdc3814390274d664d5a08475d03bc6ea2fe6c3c6391cd6fe9260"
	I1006 19:55:14.627509  202623 cri.go:89] found id: "e3421553bee92bb445fc71dc28a9e73f44a2e2fdf3f7ebc32bcf8e628b6f1a02"
	I1006 19:55:14.627512  202623 cri.go:89] found id: "730df4b08913c9671db866cfd407ba257e98404db9dacaf36ebb44dfbc61ab38"
	I1006 19:55:14.627515  202623 cri.go:89] found id: "186d0a80c923460b51dd7d0acf3cdfa8b7d291a0ad41f9d28777566a35ac63c5"
	I1006 19:55:14.627518  202623 cri.go:89] found id: "ffc6fa832b5c0989f18a443f577b66e36306a055f9f4dda54f1184a47f1c75de"
	I1006 19:55:14.627525  202623 cri.go:89] found id: "231abe139bb7ae75ba79b083238e5ef05c24806414c17ad3f26f0fc8b871d4dc"
	I1006 19:55:14.627528  202623 cri.go:89] found id: "c02a55032bbff6f8848c6ea1b9ad39ce634e876f7d260d85021fe5ee875a7b0f"
	I1006 19:55:14.627531  202623 cri.go:89] found id: ""
	I1006 19:55:14.627580  202623 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:55:14.639388  202623 retry.go:31] will retry after 303.693261ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:55:14Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:55:14.943911  202623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:55:14.957701  202623 pause.go:51] kubelet running: false
	I1006 19:55:14.957775  202623 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:55:15.168677  202623 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:55:15.168765  202623 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:55:15.238567  202623 cri.go:89] found id: "47a18794a163924d8e7f1733f13eb816ab22d7b97358bc2c00b3efea43cb69cc"
	I1006 19:55:15.238589  202623 cri.go:89] found id: "d345bd4bc027cf1668362282ac58b7897b3db88e8a7ebbc3d85ef30fb4ffcf74"
	I1006 19:55:15.238593  202623 cri.go:89] found id: "51aa04147551728423412f83b162475268c3ea26736624b38098bb9292184c3a"
	I1006 19:55:15.238597  202623 cri.go:89] found id: "45f69ddd3c69b01b9fe25ae44f01f57b7f556ed19c8d7d38d509adc8fa4ce0a4"
	I1006 19:55:15.238600  202623 cri.go:89] found id: "27525ce70f1fdc3814390274d664d5a08475d03bc6ea2fe6c3c6391cd6fe9260"
	I1006 19:55:15.238605  202623 cri.go:89] found id: "e3421553bee92bb445fc71dc28a9e73f44a2e2fdf3f7ebc32bcf8e628b6f1a02"
	I1006 19:55:15.238608  202623 cri.go:89] found id: "730df4b08913c9671db866cfd407ba257e98404db9dacaf36ebb44dfbc61ab38"
	I1006 19:55:15.238612  202623 cri.go:89] found id: "186d0a80c923460b51dd7d0acf3cdfa8b7d291a0ad41f9d28777566a35ac63c5"
	I1006 19:55:15.238616  202623 cri.go:89] found id: "ffc6fa832b5c0989f18a443f577b66e36306a055f9f4dda54f1184a47f1c75de"
	I1006 19:55:15.238622  202623 cri.go:89] found id: "231abe139bb7ae75ba79b083238e5ef05c24806414c17ad3f26f0fc8b871d4dc"
	I1006 19:55:15.238625  202623 cri.go:89] found id: "c02a55032bbff6f8848c6ea1b9ad39ce634e876f7d260d85021fe5ee875a7b0f"
	I1006 19:55:15.238628  202623 cri.go:89] found id: ""
	I1006 19:55:15.238677  202623 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:55:15.252891  202623 out.go:203] 
	W1006 19:55:15.254139  202623 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:55:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:55:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 19:55:15.254177  202623 out.go:285] * 
	* 
	W1006 19:55:15.259137  202623 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 19:55:15.260558  202623 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-314275 --alsologtostderr -v=1 failed: exit status 80
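
The stderr above shows the pause path on the node: confirm kubelet is running, disable it, enumerate CRI containers via crictl (which succeeds and returns container IDs), then run sudo runc list -f json, which fails because /run/runc does not exist; that command is retried three times with short jittered delays (171ms, 327ms, 303ms) before the command exits with GUEST_PAUSE. A minimal Go sketch of that retry shape only, using hypothetical helper names rather than minikube's actual functions:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// listRunningContainers stands in for the failing `sudo runc list -f json`
	// step from the log; the real call is executed over SSH inside the node.
	func listRunningContainers() error {
		return fmt.Errorf("open /run/runc: no such file or directory")
	}

	// retryAfterJitter mirrors the retry.go pattern visible above: run the
	// operation, and on failure wait a short randomized interval before the
	// next attempt, giving up after maxAttempts.
	func retryAfterJitter(maxAttempts int, fn func() error) error {
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			if attempt == maxAttempts {
				break
			}
			wait := time.Duration(150+rand.Intn(250)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
		return err
	}

	func main() {
		if err := retryAfterJitter(4, listRunningContainers); err != nil {
			fmt.Println("Exiting due to GUEST_PAUSE:", err)
		}
	}

The notable contrast in the log is that crictl can list the kube-system containers while runc cannot, which suggests the failure sits in how the pause path enumerates running containers rather than in the cluster itself.
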
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-314275
helpers_test.go:243: (dbg) docker inspect no-preload-314275:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab",
	        "Created": "2025-10-06T19:52:30.053793791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 199619,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:54:11.150258567Z",
	            "FinishedAt": "2025-10-06T19:54:10.274945641Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/hostname",
	        "HostsPath": "/var/lib/docker/containers/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/hosts",
	        "LogPath": "/var/lib/docker/containers/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab-json.log",
	        "Name": "/no-preload-314275",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-314275:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-314275",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab",
	                "LowerDir": "/var/lib/docker/overlay2/6baf105e7753dc1d2374b161686b0bc798240cf018581905f7dd4f21447d1a20-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6baf105e7753dc1d2374b161686b0bc798240cf018581905f7dd4f21447d1a20/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6baf105e7753dc1d2374b161686b0bc798240cf018581905f7dd4f21447d1a20/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6baf105e7753dc1d2374b161686b0bc798240cf018581905f7dd4f21447d1a20/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-314275",
	                "Source": "/var/lib/docker/volumes/no-preload-314275/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-314275",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-314275",
	                "name.minikube.sigs.k8s.io": "no-preload-314275",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9e94f2597555a477927c1e5aa4f70e73c451bf816ae06c83dddd3fa9af5c90b",
	            "SandboxKey": "/var/run/docker/netns/a9e94f259755",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-314275": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:90:b1:21:cf:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b693310dd981b3558dbfee81926e93addf9d9e76e4588123249599a4c1c5d16e",
	                    "EndpointID": "636c92ff5fa376eaa6498c6bb65fb7cb13930bad72b0e37032d7b86f71e16c63",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-314275",
	                        "3b7c30b4fccf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
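
The docker inspect output above records the node container's resource and network configuration: HostConfig.Memory is 3221225472 bytes, matching the --memory=3072 (MiB) flag used to start the profile; NanoCpus is 2000000000 (two CPUs); and each exposed port is bound to a dynamically assigned host port on 127.0.0.1 (for example 22/tcp on 33070, the port the pause command's SSH client used, and 8443/tcp on 33073 for the API server). A minimal sketch of reading the same fields programmatically, assuming the Docker Engine Go SDK (github.com/docker/docker/client); the container name is taken from the output above:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		// Inspect the same container the post-mortem dumps above.
		info, err := cli.ContainerInspect(context.Background(), "no-preload-314275")
		if err != nil {
			panic(err)
		}

		fmt.Println("status:", info.State.Status)            // "running"
		fmt.Println("memory bytes:", info.HostConfig.Memory) // 3221225472 == --memory=3072 (MiB)
		fmt.Println("nano CPUs:", info.HostConfig.NanoCPUs)  // 2000000000 == 2 CPUs
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				// e.g. 8443/tcp -> 127.0.0.1:33073
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}

helpers_test.go reaches the same data through the docker inspect CLI; the SDK form is shown only to make the field paths (State.Status, HostConfig, NetworkSettings.Ports) explicit.
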
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-314275 -n no-preload-314275
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-314275 -n no-preload-314275: exit status 2 (349.914437ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-314275 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-314275 logs -n 25: (1.350164064s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ cert-options-593131 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-593131    │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ ssh     │ -p cert-options-593131 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-593131    │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ delete  │ -p cert-options-593131                                                                                                                                                                                                                        │ cert-options-593131    │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-100545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │                     │
	│ stop    │ -p old-k8s-version-100545 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-100545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:51 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p cert-expiration-585086 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-585086 │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:53 UTC │
	│ image   │ old-k8s-version-100545 image list --format=json                                                                                                                                                                                               │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ pause   │ -p old-k8s-version-100545 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │                     │
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275      │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:53 UTC │
	│ delete  │ -p cert-expiration-585086                                                                                                                                                                                                                     │ cert-expiration-585086 │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:53 UTC │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393     │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-314275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-314275      │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │                     │
	│ stop    │ -p no-preload-314275 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-314275      │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:54 UTC │
	│ addons  │ enable dashboard -p no-preload-314275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-314275      │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:54 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275      │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-830393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-830393     │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │                     │
	│ stop    │ -p embed-certs-830393 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-830393     │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-830393 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-830393     │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ pause   │ -p no-preload-314275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-314275      │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │                     │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393     │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:55:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:55:12.724179  202624 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:55:12.724356  202624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:55:12.724368  202624 out.go:374] Setting ErrFile to fd 2...
	I1006 19:55:12.724373  202624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:55:12.724663  202624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:55:12.725084  202624 out.go:368] Setting JSON to false
	I1006 19:55:12.726722  202624 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5848,"bootTime":1759774665,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:55:12.726802  202624 start.go:140] virtualization:  
	I1006 19:55:12.728576  202624 out.go:179] * [embed-certs-830393] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:55:12.729994  202624 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:55:12.730143  202624 notify.go:220] Checking for updates...
	I1006 19:55:12.733383  202624 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:55:12.735319  202624 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:55:12.736619  202624 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:55:12.737559  202624 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:55:12.738918  202624 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:55:12.740596  202624 config.go:182] Loaded profile config "embed-certs-830393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:55:12.741188  202624 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:55:12.766378  202624 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:55:12.766513  202624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:55:12.901861  202624 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:55:12.881148555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:55:12.901961  202624 docker.go:318] overlay module found
	I1006 19:55:12.903503  202624 out.go:179] * Using the docker driver based on existing profile
	I1006 19:55:12.905602  202624 start.go:304] selected driver: docker
	I1006 19:55:12.905624  202624 start.go:924] validating driver "docker" against &{Name:embed-certs-830393 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-830393 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:55:12.905725  202624 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:55:12.906473  202624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:55:12.996814  202624 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:55:12.986744934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:55:12.997148  202624 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:55:12.997176  202624 cni.go:84] Creating CNI manager for ""
	I1006 19:55:12.997233  202624 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:55:12.997271  202624 start.go:348] cluster config:
	{Name:embed-certs-830393 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-830393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:55:12.999028  202624 out.go:179] * Starting "embed-certs-830393" primary control-plane node in "embed-certs-830393" cluster
	I1006 19:55:13.000150  202624 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:55:13.001241  202624 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:55:13.002592  202624 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:55:13.002640  202624 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:55:13.002652  202624 cache.go:58] Caching tarball of preloaded images
	I1006 19:55:13.002753  202624 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:55:13.002767  202624 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:55:13.002877  202624 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/config.json ...
	I1006 19:55:13.003096  202624 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:55:13.028432  202624 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:55:13.028455  202624 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:55:13.028475  202624 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:55:13.028498  202624 start.go:360] acquireMachinesLock for embed-certs-830393: {Name:mk9482698940ed15367c12951e7ada37afdeab68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:55:13.028567  202624 start.go:364] duration metric: took 51.8µs to acquireMachinesLock for "embed-certs-830393"
	I1006 19:55:13.028586  202624 start.go:96] Skipping create...Using existing machine configuration
	I1006 19:55:13.028597  202624 fix.go:54] fixHost starting: 
	I1006 19:55:13.028853  202624 cli_runner.go:164] Run: docker container inspect embed-certs-830393 --format={{.State.Status}}
	I1006 19:55:13.049508  202624 fix.go:112] recreateIfNeeded on embed-certs-830393: state=Stopped err=<nil>
	W1006 19:55:13.049536  202624 fix.go:138] unexpected machine state, will restart: <nil>
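	
	The start trace above shows minikube reusing the stopped "embed-certs-830393" profile (docker driver, crio runtime, EmbedCerts:true, 3072 MB memory, 2 CPUs). A roughly equivalent invocation is sketched below; the flag values are reconstructed from the profile config in the log and are an assumption, not a command taken from the report:
	
	  # Hypothetical reconstruction of the start command; values mirror the loaded profile config above.
	  out/minikube-linux-arm64 start -p embed-certs-830393 \
	    --driver=docker --container-runtime=crio --embed-certs \
	    --memory=3072 --cpus=2 --alsologtostderr -v=1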
	
	
	==> CRI-O <==
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.097616386Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6c95daab-098b-41fe-9dd1-0c0b152b3e13 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.099081463Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e58d96bb-a821-4d81-b88d-4e1d3486378e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.09952248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.105370584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.105769959Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/09f811b89c6dba438a7238678f3d5a1af4dc025908036f9d11340bdc86536a50/merged/etc/passwd: no such file or directory"
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.10588791Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/09f811b89c6dba438a7238678f3d5a1af4dc025908036f9d11340bdc86536a50/merged/etc/group: no such file or directory"
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.106252487Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.126663698Z" level=info msg="Created container 47a18794a163924d8e7f1733f13eb816ab22d7b97358bc2c00b3efea43cb69cc: kube-system/storage-provisioner/storage-provisioner" id=e58d96bb-a821-4d81-b88d-4e1d3486378e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.128095461Z" level=info msg="Starting container: 47a18794a163924d8e7f1733f13eb816ab22d7b97358bc2c00b3efea43cb69cc" id=e1e32247-4914-45b7-b78d-a413fe1bed5f name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.130169492Z" level=info msg="Started container" PID=1634 containerID=47a18794a163924d8e7f1733f13eb816ab22d7b97358bc2c00b3efea43cb69cc description=kube-system/storage-provisioner/storage-provisioner id=e1e32247-4914-45b7-b78d-a413fe1bed5f name=/runtime.v1.RuntimeService/StartContainer sandboxID=1c29fbaf5b58790aa2de37bbe6345e59ce5747651f13725803ab3d40648e9884
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.653098765Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.657181374Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.657216304Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.657247016Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.661153416Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.661189888Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.6612153Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.664449861Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.66455602Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.664596431Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.668153574Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.668191228Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.668217575Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.671489822Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.671528272Z" level=info msg="Updated default CNI network name to kindnet"
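	
	The CRI-O entries above show kindnet rewriting /etc/cni/net.d/10-kindnet.conflist and CRI-O re-reading it on each CREATE/WRITE/RENAME event. As an illustration only, the resulting config can be read back from inside the node; the command assumes the no-preload-314275 profile is still running:
	
	  # Illustrative: dump the CNI config CRI-O reports picking up.
	  minikube ssh -p no-preload-314275 -- sudo cat /etc/cni/net.d/10-kindnet.conflist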
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	47a18794a1639       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago      Running             storage-provisioner         2                   1c29fbaf5b587       storage-provisioner                          kube-system
	231abe139bb7a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   700ce2ed2056c       dashboard-metrics-scraper-6ffb444bf9-bhkxd   kubernetes-dashboard
	c02a55032bbff       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago      Running             kubernetes-dashboard        0                   e9dcfec14e580       kubernetes-dashboard-855c9754f9-k8dzl        kubernetes-dashboard
	11ac9c2aa86de       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   14138af1a07e3       busybox                                      default
	d345bd4bc027c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   61d13e5c8fab6       coredns-66bc5c9577-tccns                     kube-system
	51aa041475517       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   74238da692c1b       kindnet-b6hb7                                kube-system
	45f69ddd3c69b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   e56364414d284       kube-proxy-nr6pc                             kube-system
	27525ce70f1fd       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           52 seconds ago      Exited              storage-provisioner         1                   1c29fbaf5b587       storage-provisioner                          kube-system
	e3421553bee92       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           57 seconds ago      Running             kube-controller-manager     1                   1ac31b905990b       kube-controller-manager-no-preload-314275    kube-system
	730df4b08913c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           57 seconds ago      Running             kube-apiserver              1                   488bf9a14738f       kube-apiserver-no-preload-314275             kube-system
	186d0a80c9234       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           57 seconds ago      Running             etcd                        1                   d6155e6203aa3       etcd-no-preload-314275                       kube-system
	ffc6fa832b5c0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           57 seconds ago      Running             kube-scheduler              1                   e992eb2df8c41       kube-scheduler-no-preload-314275             kube-system
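	
	The table above is CRI-level container state. Roughly the same view can be obtained with crictl from inside the node (illustrative command, not part of the test run):
	
	  # Illustrative: list all CRI-O containers, including exited ones.
	  minikube ssh -p no-preload-314275 -- sudo crictl ps -a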
	
	
	==> coredns [d345bd4bc027cf1668362282ac58b7897b3db88e8a7ebbc3d85ef30fb4ffcf74] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59136 - 15011 "HINFO IN 2513846065025795589.4192107700506759617. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005538461s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
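	
	The CoreDNS log above shows the kubernetes plugin timing out against the Service VIP 10.96.0.1:443 while the apiserver was still coming back, then the readiness plugin waiting on the "kubernetes" check. A hedged way to confirm recovery afterwards, assuming the conventional k8s-app=kube-dns label:
	
	  # Illustrative: check CoreDNS pod state and recent logs after the restart.
	  kubectl -n kube-system get pods -l k8s-app=kube-dns
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20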
	
	
	==> describe nodes <==
	Name:               no-preload-314275
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-314275
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=no-preload-314275
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_53_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:53:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-314275
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:55:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:54:53 +0000   Mon, 06 Oct 2025 19:53:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:54:53 +0000   Mon, 06 Oct 2025 19:53:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:54:53 +0000   Mon, 06 Oct 2025 19:53:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:54:53 +0000   Mon, 06 Oct 2025 19:53:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-314275
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 217e20117b5041af99443eed96fc31f8
	  System UUID:                063eafb6-36b6-4179-b2e4-ad5bbf368dcb
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-tccns                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-no-preload-314275                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-b6hb7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-314275              250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-314275     200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-nr6pc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-314275              100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bhkxd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-k8dzl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 108s                 kube-proxy       
	  Normal   Starting                 51s                  kube-proxy       
	  Warning  CgroupV1                 2m8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node no-preload-314275 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node no-preload-314275 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node no-preload-314275 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  115s                 kubelet          Node no-preload-314275 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 115s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    115s                 kubelet          Node no-preload-314275 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     115s                 kubelet          Node no-preload-314275 status is now: NodeHasSufficientPID
	  Normal   Starting                 115s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           111s                 node-controller  Node no-preload-314275 event: Registered Node no-preload-314275 in Controller
	  Normal   NodeReady                94s                  kubelet          Node no-preload-314275 status is now: NodeReady
	  Normal   Starting                 59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node no-preload-314275 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node no-preload-314275 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node no-preload-314275 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                  node-controller  Node no-preload-314275 event: Registered Node no-preload-314275 in Controller
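	
	The node dump above is standard kubectl describe output; something like the following reproduces it against this cluster (illustrative, assumes kubeconfig points at the no-preload-314275 profile):
	
	  # Illustrative: regenerate the node description shown above.
	  kubectl describe node no-preload-314275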
	
	
	==> dmesg <==
	[Oct 6 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:23] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:52] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:54] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [186d0a80c923460b51dd7d0acf3cdfa8b7d291a0ad41f9d28777566a35ac63c5] <==
	{"level":"warn","ts":"2025-10-06T19:54:21.203213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.222187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.242836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.263153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.275580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.290016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.327421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.332302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.349949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.380496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.400098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.418934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.436633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.456950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.473154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.505665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.520979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.543465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.564289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.588072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.607215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.638214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.691820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.719089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.801684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42216","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:55:16 up  1:37,  0 user,  load average: 2.10, 2.12, 1.83
	Linux no-preload-314275 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [51aa04147551728423412f83b162475268c3ea26736624b38098bb9292184c3a] <==
	I1006 19:54:24.459202       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:54:24.459545       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1006 19:54:24.459651       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:54:24.459662       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:54:24.459674       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:54:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:54:24.652378       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:54:24.652452       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:54:24.652484       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:54:24.653675       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1006 19:54:54.652987       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1006 19:54:54.653108       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1006 19:54:54.653002       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1006 19:54:54.654072       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1006 19:54:56.053043       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:54:56.053089       1 metrics.go:72] Registering metrics
	I1006 19:54:56.053164       1 controller.go:711] "Syncing nftables rules"
	I1006 19:55:04.652781       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:55:04.652858       1 main.go:301] handling current node
	I1006 19:55:14.659852       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:55:14.659885       1 main.go:301] handling current node
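	
	The kindnet log above shows its list/watch calls timing out against 10.96.0.1:443 at 19:54:54 and the caches syncing shortly afterwards. As a hedged illustration, the DaemonSet pods can be listed with the label below; app=kindnet is the label kindnet manifests typically use and is an assumption here:
	
	  # Illustrative: list kindnet pods (label selector is assumed).
	  kubectl -n kube-system get pods -l app=kindnet -o wide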
	
	
	==> kube-apiserver [730df4b08913c9671db866cfd407ba257e98404db9dacaf36ebb44dfbc61ab38] <==
	I1006 19:54:23.021383       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1006 19:54:23.022062       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1006 19:54:23.022193       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 19:54:23.036300       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1006 19:54:23.037070       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1006 19:54:23.037123       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1006 19:54:23.037136       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:54:23.037138       1 policy_source.go:240] refreshing policies
	I1006 19:54:23.037153       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1006 19:54:23.039142       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1006 19:54:23.054658       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:54:23.091499       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1006 19:54:23.091551       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1006 19:54:23.105091       1 cache.go:39] Caches are synced for autoregister controller
	I1006 19:54:23.598562       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:54:23.832064       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 19:54:23.841911       1 controller.go:667] quota admission added evaluator for: namespaces
	I1006 19:54:24.046339       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 19:54:24.188610       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:54:24.221338       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:54:24.406620       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.147.225"}
	I1006 19:54:24.469198       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.191.208"}
	I1006 19:54:26.522109       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 19:54:26.576615       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 19:54:26.623033       1 controller.go:667] quota admission added evaluator for: replicasets.apps
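	
	The apiserver log above records cluster IPs being allocated for the two kubernetes-dashboard Services. An illustrative confirmation, not part of the test run:
	
	  # Illustrative: show the dashboard Services and their allocated cluster IPs.
	  kubectl -n kubernetes-dashboard get svc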
	
	
	==> kube-controller-manager [e3421553bee92bb445fc71dc28a9e73f44a2e2fdf3f7ebc32bcf8e628b6f1a02] <==
	I1006 19:54:26.135578       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1006 19:54:26.136085       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1006 19:54:26.136160       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1006 19:54:26.138426       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 19:54:26.142686       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1006 19:54:26.147002       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1006 19:54:26.152285       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1006 19:54:26.152387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:54:26.154647       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1006 19:54:26.163796       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1006 19:54:26.164874       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1006 19:54:26.164944       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1006 19:54:26.165065       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:54:26.165074       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:54:26.165080       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 19:54:26.165371       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1006 19:54:26.165601       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1006 19:54:26.166235       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1006 19:54:26.166292       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1006 19:54:26.177571       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 19:54:26.180708       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1006 19:54:26.188936       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1006 19:54:26.189036       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1006 19:54:26.189126       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-314275"
	I1006 19:54:26.189174       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [45f69ddd3c69b01b9fe25ae44f01f57b7f556ed19c8d7d38d509adc8fa4ce0a4] <==
	I1006 19:54:24.588039       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:54:24.670533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:54:24.774361       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:54:24.774406       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1006 19:54:24.774496       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:54:24.825205       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:54:24.828050       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:54:24.843157       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:54:24.843561       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:54:24.843842       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:54:24.845131       1 config.go:200] "Starting service config controller"
	I1006 19:54:24.845216       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:54:24.845261       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:54:24.845292       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:54:24.845327       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:54:24.845354       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:54:24.846045       1 config.go:309] "Starting node config controller"
	I1006 19:54:24.848651       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:54:24.848731       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:54:24.945386       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 19:54:24.945384       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1006 19:54:24.945422       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
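	
	kube-proxy above starts in iptables mode and syncs its service and endpoint-slice caches. As an illustration only, the KUBE-SERVICES chain it programs can be inspected from inside the node:
	
	  # Illustrative: list the first entries of the nat-table chain kube-proxy manages.
	  minikube ssh -p no-preload-314275 -- sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20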
	
	
	==> kube-scheduler [ffc6fa832b5c0989f18a443f577b66e36306a055f9f4dda54f1184a47f1c75de] <==
	I1006 19:54:20.504956       1 serving.go:386] Generated self-signed cert in-memory
	I1006 19:54:23.192060       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 19:54:23.192092       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:54:23.205736       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 19:54:23.205818       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1006 19:54:23.205835       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1006 19:54:23.205867       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 19:54:23.226924       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:54:23.239651       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:54:23.239260       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:54:23.239689       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:54:23.309983       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1006 19:54:23.343792       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:54:23.343855       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 19:54:26 no-preload-314275 kubelet[766]: I1006 19:54:26.940644     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aa6e6ce9-aebb-4159-a51c-d41852eb0898-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-bhkxd\" (UID: \"aa6e6ce9-aebb-4159-a51c-d41852eb0898\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhkxd"
	Oct 06 19:54:27 no-preload-314275 kubelet[766]: W1006 19:54:27.116164     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/crio-700ce2ed2056c33cab2ba4e17dd176dcd5eb03afe33e4e55c622d0d4fe0275f3 WatchSource:0}: Error finding container 700ce2ed2056c33cab2ba4e17dd176dcd5eb03afe33e4e55c622d0d4fe0275f3: Status 404 returned error can't find the container with id 700ce2ed2056c33cab2ba4e17dd176dcd5eb03afe33e4e55c622d0d4fe0275f3
	Oct 06 19:54:27 no-preload-314275 kubelet[766]: W1006 19:54:27.126727     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/crio-e9dcfec14e580fa37b62d147235e3a9fad19f7a839be6ffbaad4701e320fc593 WatchSource:0}: Error finding container e9dcfec14e580fa37b62d147235e3a9fad19f7a839be6ffbaad4701e320fc593: Status 404 returned error can't find the container with id e9dcfec14e580fa37b62d147235e3a9fad19f7a839be6ffbaad4701e320fc593
	Oct 06 19:54:29 no-preload-314275 kubelet[766]: I1006 19:54:29.024199     766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 06 19:54:32 no-preload-314275 kubelet[766]: I1006 19:54:32.015231     766 scope.go:117] "RemoveContainer" containerID="16274262323e3386bf53955850fc0bd2b48456ca122166dfec8b6ce43364f95a"
	Oct 06 19:54:33 no-preload-314275 kubelet[766]: I1006 19:54:33.018821     766 scope.go:117] "RemoveContainer" containerID="16274262323e3386bf53955850fc0bd2b48456ca122166dfec8b6ce43364f95a"
	Oct 06 19:54:33 no-preload-314275 kubelet[766]: I1006 19:54:33.019192     766 scope.go:117] "RemoveContainer" containerID="897b3c592763922f51e741170f3870edc47697d0d8c33848e9c48b9e972fbf4d"
	Oct 06 19:54:33 no-preload-314275 kubelet[766]: E1006 19:54:33.019373     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhkxd_kubernetes-dashboard(aa6e6ce9-aebb-4159-a51c-d41852eb0898)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhkxd" podUID="aa6e6ce9-aebb-4159-a51c-d41852eb0898"
	Oct 06 19:54:34 no-preload-314275 kubelet[766]: I1006 19:54:34.023921     766 scope.go:117] "RemoveContainer" containerID="897b3c592763922f51e741170f3870edc47697d0d8c33848e9c48b9e972fbf4d"
	Oct 06 19:54:34 no-preload-314275 kubelet[766]: E1006 19:54:34.024063     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhkxd_kubernetes-dashboard(aa6e6ce9-aebb-4159-a51c-d41852eb0898)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhkxd" podUID="aa6e6ce9-aebb-4159-a51c-d41852eb0898"
	Oct 06 19:54:36 no-preload-314275 kubelet[766]: I1006 19:54:36.048895     766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-k8dzl" podStartSLOduration=1.288508078 podStartE2EDuration="10.048870426s" podCreationTimestamp="2025-10-06 19:54:26 +0000 UTC" firstStartedPulling="2025-10-06 19:54:27.130278458 +0000 UTC m=+9.517025313" lastFinishedPulling="2025-10-06 19:54:35.890640806 +0000 UTC m=+18.277387661" observedRunningTime="2025-10-06 19:54:36.047479607 +0000 UTC m=+18.434226463" watchObservedRunningTime="2025-10-06 19:54:36.048870426 +0000 UTC m=+18.435617281"
	Oct 06 19:54:41 no-preload-314275 kubelet[766]: I1006 19:54:41.556460     766 scope.go:117] "RemoveContainer" containerID="897b3c592763922f51e741170f3870edc47697d0d8c33848e9c48b9e972fbf4d"
	Oct 06 19:54:41 no-preload-314275 kubelet[766]: E1006 19:54:41.556662     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhkxd_kubernetes-dashboard(aa6e6ce9-aebb-4159-a51c-d41852eb0898)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhkxd" podUID="aa6e6ce9-aebb-4159-a51c-d41852eb0898"
	Oct 06 19:54:53 no-preload-314275 kubelet[766]: I1006 19:54:53.828194     766 scope.go:117] "RemoveContainer" containerID="897b3c592763922f51e741170f3870edc47697d0d8c33848e9c48b9e972fbf4d"
	Oct 06 19:54:54 no-preload-314275 kubelet[766]: I1006 19:54:54.086527     766 scope.go:117] "RemoveContainer" containerID="897b3c592763922f51e741170f3870edc47697d0d8c33848e9c48b9e972fbf4d"
	Oct 06 19:54:54 no-preload-314275 kubelet[766]: I1006 19:54:54.086719     766 scope.go:117] "RemoveContainer" containerID="231abe139bb7ae75ba79b083238e5ef05c24806414c17ad3f26f0fc8b871d4dc"
	Oct 06 19:54:54 no-preload-314275 kubelet[766]: E1006 19:54:54.086884     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhkxd_kubernetes-dashboard(aa6e6ce9-aebb-4159-a51c-d41852eb0898)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhkxd" podUID="aa6e6ce9-aebb-4159-a51c-d41852eb0898"
	Oct 06 19:54:55 no-preload-314275 kubelet[766]: I1006 19:54:55.092917     766 scope.go:117] "RemoveContainer" containerID="27525ce70f1fdc3814390274d664d5a08475d03bc6ea2fe6c3c6391cd6fe9260"
	Oct 06 19:55:01 no-preload-314275 kubelet[766]: I1006 19:55:01.556532     766 scope.go:117] "RemoveContainer" containerID="231abe139bb7ae75ba79b083238e5ef05c24806414c17ad3f26f0fc8b871d4dc"
	Oct 06 19:55:01 no-preload-314275 kubelet[766]: E1006 19:55:01.556726     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhkxd_kubernetes-dashboard(aa6e6ce9-aebb-4159-a51c-d41852eb0898)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhkxd" podUID="aa6e6ce9-aebb-4159-a51c-d41852eb0898"
	Oct 06 19:55:12 no-preload-314275 kubelet[766]: I1006 19:55:12.827012     766 scope.go:117] "RemoveContainer" containerID="231abe139bb7ae75ba79b083238e5ef05c24806414c17ad3f26f0fc8b871d4dc"
	Oct 06 19:55:12 no-preload-314275 kubelet[766]: E1006 19:55:12.827192     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhkxd_kubernetes-dashboard(aa6e6ce9-aebb-4159-a51c-d41852eb0898)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhkxd" podUID="aa6e6ce9-aebb-4159-a51c-d41852eb0898"
	Oct 06 19:55:13 no-preload-314275 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 06 19:55:13 no-preload-314275 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 06 19:55:13 no-preload-314275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c02a55032bbff6f8848c6ea1b9ad39ce634e876f7d260d85021fe5ee875a7b0f] <==
	2025/10/06 19:54:35 Using namespace: kubernetes-dashboard
	2025/10/06 19:54:35 Using in-cluster config to connect to apiserver
	2025/10/06 19:54:35 Using secret token for csrf signing
	2025/10/06 19:54:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/06 19:54:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/06 19:54:35 Successful initial request to the apiserver, version: v1.34.1
	2025/10/06 19:54:35 Generating JWE encryption key
	2025/10/06 19:54:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/06 19:54:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/06 19:54:37 Initializing JWE encryption key from synchronized object
	2025/10/06 19:54:37 Creating in-cluster Sidecar client
	2025/10/06 19:54:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/06 19:54:37 Serving insecurely on HTTP port: 9090
	2025/10/06 19:55:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/06 19:54:35 Starting overwatch
	
	
	==> storage-provisioner [27525ce70f1fdc3814390274d664d5a08475d03bc6ea2fe6c3c6391cd6fe9260] <==
	I1006 19:54:24.585082       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1006 19:54:54.596573       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [47a18794a163924d8e7f1733f13eb816ab22d7b97358bc2c00b3efea43cb69cc] <==
	I1006 19:54:55.150084       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 19:54:55.182571       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 19:54:55.183449       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1006 19:54:55.186271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:58.641795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:02.902087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:06.500565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:09.553622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:12.575957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:12.593194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:55:12.593359       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 19:55:12.593562       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-314275_c91e5fde-de52-4f15-aecc-7f93914809f7!
	I1006 19:55:12.601831       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b6bc9f45-80c8-40cf-884b-42fa87f06e10", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-314275_c91e5fde-de52-4f15-aecc-7f93914809f7 became leader
	W1006 19:55:12.616619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:12.629491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:55:12.695208       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-314275_c91e5fde-de52-4f15-aecc-7f93914809f7!
	W1006 19:55:14.632960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:14.639738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:16.645241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:16.653177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-314275 -n no-preload-314275
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-314275 -n no-preload-314275: exit status 2 (423.268659ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-314275 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-314275
helpers_test.go:243: (dbg) docker inspect no-preload-314275:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab",
	        "Created": "2025-10-06T19:52:30.053793791Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 199619,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:54:11.150258567Z",
	            "FinishedAt": "2025-10-06T19:54:10.274945641Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/hostname",
	        "HostsPath": "/var/lib/docker/containers/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/hosts",
	        "LogPath": "/var/lib/docker/containers/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab-json.log",
	        "Name": "/no-preload-314275",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-314275:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-314275",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab",
	                "LowerDir": "/var/lib/docker/overlay2/6baf105e7753dc1d2374b161686b0bc798240cf018581905f7dd4f21447d1a20-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6baf105e7753dc1d2374b161686b0bc798240cf018581905f7dd4f21447d1a20/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6baf105e7753dc1d2374b161686b0bc798240cf018581905f7dd4f21447d1a20/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6baf105e7753dc1d2374b161686b0bc798240cf018581905f7dd4f21447d1a20/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-314275",
	                "Source": "/var/lib/docker/volumes/no-preload-314275/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-314275",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-314275",
	                "name.minikube.sigs.k8s.io": "no-preload-314275",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9e94f2597555a477927c1e5aa4f70e73c451bf816ae06c83dddd3fa9af5c90b",
	            "SandboxKey": "/var/run/docker/netns/a9e94f259755",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-314275": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:90:b1:21:cf:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b693310dd981b3558dbfee81926e93addf9d9e76e4588123249599a4c1c5d16e",
	                    "EndpointID": "636c92ff5fa376eaa6498c6bb65fb7cb13930bad72b0e37032d7b86f71e16c63",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-314275",
	                        "3b7c30b4fccf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-314275 -n no-preload-314275
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-314275 -n no-preload-314275: exit status 2 (427.605837ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-314275 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-314275 logs -n 25: (1.613643799s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ cert-options-593131 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-593131    │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ ssh     │ -p cert-options-593131 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-593131    │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ delete  │ -p cert-options-593131                                                                                                                                                                                                                        │ cert-options-593131    │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:49 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:49 UTC │ 06 Oct 25 19:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-100545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │                     │
	│ stop    │ -p old-k8s-version-100545 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:51 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-100545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:51 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p cert-expiration-585086 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-585086 │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:53 UTC │
	│ image   │ old-k8s-version-100545 image list --format=json                                                                                                                                                                                               │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ pause   │ -p old-k8s-version-100545 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │                     │
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545 │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275      │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:53 UTC │
	│ delete  │ -p cert-expiration-585086                                                                                                                                                                                                                     │ cert-expiration-585086 │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:53 UTC │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393     │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-314275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-314275      │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │                     │
	│ stop    │ -p no-preload-314275 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-314275      │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:54 UTC │
	│ addons  │ enable dashboard -p no-preload-314275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-314275      │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:54 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275      │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-830393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-830393     │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │                     │
	│ stop    │ -p embed-certs-830393 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-830393     │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-830393 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-830393     │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ pause   │ -p no-preload-314275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-314275      │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │                     │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393     │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:55:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:55:12.724179  202624 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:55:12.724356  202624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:55:12.724368  202624 out.go:374] Setting ErrFile to fd 2...
	I1006 19:55:12.724373  202624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:55:12.724663  202624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:55:12.725084  202624 out.go:368] Setting JSON to false
	I1006 19:55:12.726722  202624 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5848,"bootTime":1759774665,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:55:12.726802  202624 start.go:140] virtualization:  
	I1006 19:55:12.728576  202624 out.go:179] * [embed-certs-830393] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:55:12.729994  202624 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:55:12.730143  202624 notify.go:220] Checking for updates...
	I1006 19:55:12.733383  202624 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:55:12.735319  202624 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:55:12.736619  202624 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:55:12.737559  202624 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:55:12.738918  202624 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:55:12.740596  202624 config.go:182] Loaded profile config "embed-certs-830393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:55:12.741188  202624 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:55:12.766378  202624 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:55:12.766513  202624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:55:12.901861  202624 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:55:12.881148555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:55:12.901961  202624 docker.go:318] overlay module found
	I1006 19:55:12.903503  202624 out.go:179] * Using the docker driver based on existing profile
	I1006 19:55:12.905602  202624 start.go:304] selected driver: docker
	I1006 19:55:12.905624  202624 start.go:924] validating driver "docker" against &{Name:embed-certs-830393 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-830393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:55:12.905725  202624 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:55:12.906473  202624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:55:12.996814  202624 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:55:12.986744934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:55:12.997148  202624 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:55:12.997176  202624 cni.go:84] Creating CNI manager for ""
	I1006 19:55:12.997233  202624 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:55:12.997271  202624 start.go:348] cluster config:
	{Name:embed-certs-830393 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-830393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:55:12.999028  202624 out.go:179] * Starting "embed-certs-830393" primary control-plane node in "embed-certs-830393" cluster
	I1006 19:55:13.000150  202624 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:55:13.001241  202624 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:55:13.002592  202624 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:55:13.002640  202624 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:55:13.002652  202624 cache.go:58] Caching tarball of preloaded images
	I1006 19:55:13.002753  202624 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:55:13.002767  202624 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:55:13.002877  202624 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/config.json ...
	I1006 19:55:13.003096  202624 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:55:13.028432  202624 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:55:13.028455  202624 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:55:13.028475  202624 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:55:13.028498  202624 start.go:360] acquireMachinesLock for embed-certs-830393: {Name:mk9482698940ed15367c12951e7ada37afdeab68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:55:13.028567  202624 start.go:364] duration metric: took 51.8µs to acquireMachinesLock for "embed-certs-830393"
	I1006 19:55:13.028586  202624 start.go:96] Skipping create...Using existing machine configuration
	I1006 19:55:13.028597  202624 fix.go:54] fixHost starting: 
	I1006 19:55:13.028853  202624 cli_runner.go:164] Run: docker container inspect embed-certs-830393 --format={{.State.Status}}
	I1006 19:55:13.049508  202624 fix.go:112] recreateIfNeeded on embed-certs-830393: state=Stopped err=<nil>
	W1006 19:55:13.049536  202624 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 19:55:13.051412  202624 out.go:252] * Restarting existing docker container for "embed-certs-830393" ...
	I1006 19:55:13.051502  202624 cli_runner.go:164] Run: docker start embed-certs-830393
	I1006 19:55:13.350619  202624 cli_runner.go:164] Run: docker container inspect embed-certs-830393 --format={{.State.Status}}
	I1006 19:55:13.371010  202624 kic.go:430] container "embed-certs-830393" state is running.
	I1006 19:55:13.371501  202624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-830393
	I1006 19:55:13.401971  202624 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/embed-certs-830393/config.json ...
	I1006 19:55:13.402209  202624 machine.go:93] provisionDockerMachine start ...
	I1006 19:55:13.402280  202624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:55:13.431587  202624 main.go:141] libmachine: Using SSH client type: native
	I1006 19:55:13.431993  202624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33075 <nil> <nil>}
	I1006 19:55:13.432006  202624 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:55:13.432871  202624 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1006 19:55:16.591581  202624 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-830393
	
	I1006 19:55:16.591605  202624 ubuntu.go:182] provisioning hostname "embed-certs-830393"
	I1006 19:55:16.591675  202624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:55:16.612310  202624 main.go:141] libmachine: Using SSH client type: native
	I1006 19:55:16.612625  202624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33075 <nil> <nil>}
	I1006 19:55:16.612637  202624 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-830393 && echo "embed-certs-830393" | sudo tee /etc/hostname
	I1006 19:55:16.779892  202624 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-830393
	
	I1006 19:55:16.780007  202624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:55:16.804333  202624 main.go:141] libmachine: Using SSH client type: native
	I1006 19:55:16.804665  202624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33075 <nil> <nil>}
	I1006 19:55:16.804686  202624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-830393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-830393/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-830393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:55:16.951352  202624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:55:16.951373  202624 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:55:16.951404  202624 ubuntu.go:190] setting up certificates
	I1006 19:55:16.951413  202624 provision.go:84] configureAuth start
	I1006 19:55:16.951471  202624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-830393
	I1006 19:55:16.976363  202624 provision.go:143] copyHostCerts
	I1006 19:55:16.976432  202624 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:55:16.976442  202624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:55:16.976533  202624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:55:16.976623  202624 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:55:16.976628  202624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:55:16.976654  202624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:55:16.976708  202624 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:55:16.976713  202624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:55:16.976735  202624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:55:16.976781  202624 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.embed-certs-830393 san=[127.0.0.1 192.168.85.2 embed-certs-830393 localhost minikube]
	I1006 19:55:17.257766  202624 provision.go:177] copyRemoteCerts
	I1006 19:55:17.257836  202624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:55:17.257884  202624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:55:17.278405  202624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/embed-certs-830393/id_rsa Username:docker}
	I1006 19:55:17.385041  202624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:55:17.417444  202624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1006 19:55:17.444642  202624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 19:55:17.471299  202624 provision.go:87] duration metric: took 519.871779ms to configureAuth
	I1006 19:55:17.471322  202624 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:55:17.471514  202624 config.go:182] Loaded profile config "embed-certs-830393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:55:17.471617  202624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:55:17.501488  202624 main.go:141] libmachine: Using SSH client type: native
	I1006 19:55:17.501784  202624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33075 <nil> <nil>}
	I1006 19:55:17.501797  202624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.097616386Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6c95daab-098b-41fe-9dd1-0c0b152b3e13 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.099081463Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e58d96bb-a821-4d81-b88d-4e1d3486378e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.09952248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.105370584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.105769959Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/09f811b89c6dba438a7238678f3d5a1af4dc025908036f9d11340bdc86536a50/merged/etc/passwd: no such file or directory"
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.10588791Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/09f811b89c6dba438a7238678f3d5a1af4dc025908036f9d11340bdc86536a50/merged/etc/group: no such file or directory"
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.106252487Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.126663698Z" level=info msg="Created container 47a18794a163924d8e7f1733f13eb816ab22d7b97358bc2c00b3efea43cb69cc: kube-system/storage-provisioner/storage-provisioner" id=e58d96bb-a821-4d81-b88d-4e1d3486378e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.128095461Z" level=info msg="Starting container: 47a18794a163924d8e7f1733f13eb816ab22d7b97358bc2c00b3efea43cb69cc" id=e1e32247-4914-45b7-b78d-a413fe1bed5f name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:54:55 no-preload-314275 crio[651]: time="2025-10-06T19:54:55.130169492Z" level=info msg="Started container" PID=1634 containerID=47a18794a163924d8e7f1733f13eb816ab22d7b97358bc2c00b3efea43cb69cc description=kube-system/storage-provisioner/storage-provisioner id=e1e32247-4914-45b7-b78d-a413fe1bed5f name=/runtime.v1.RuntimeService/StartContainer sandboxID=1c29fbaf5b58790aa2de37bbe6345e59ce5747651f13725803ab3d40648e9884
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.653098765Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.657181374Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.657216304Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.657247016Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.661153416Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.661189888Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.6612153Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.664449861Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.66455602Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.664596431Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.668153574Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.668191228Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.668217575Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.671489822Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:55:04 no-preload-314275 crio[651]: time="2025-10-06T19:55:04.671528272Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	47a18794a1639       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           23 seconds ago       Running             storage-provisioner         2                   1c29fbaf5b587       storage-provisioner                          kube-system
	231abe139bb7a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   700ce2ed2056c       dashboard-metrics-scraper-6ffb444bf9-bhkxd   kubernetes-dashboard
	c02a55032bbff       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   e9dcfec14e580       kubernetes-dashboard-855c9754f9-k8dzl        kubernetes-dashboard
	11ac9c2aa86de       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   14138af1a07e3       busybox                                      default
	d345bd4bc027c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   61d13e5c8fab6       coredns-66bc5c9577-tccns                     kube-system
	51aa041475517       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   74238da692c1b       kindnet-b6hb7                                kube-system
	45f69ddd3c69b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   e56364414d284       kube-proxy-nr6pc                             kube-system
	27525ce70f1fd       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           54 seconds ago       Exited              storage-provisioner         1                   1c29fbaf5b587       storage-provisioner                          kube-system
	e3421553bee92       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   1ac31b905990b       kube-controller-manager-no-preload-314275    kube-system
	730df4b08913c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   488bf9a14738f       kube-apiserver-no-preload-314275             kube-system
	186d0a80c9234       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   d6155e6203aa3       etcd-no-preload-314275                       kube-system
	ffc6fa832b5c0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   e992eb2df8c41       kube-scheduler-no-preload-314275             kube-system
	
	
	==> coredns [d345bd4bc027cf1668362282ac58b7897b3db88e8a7ebbc3d85ef30fb4ffcf74] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59136 - 15011 "HINFO IN 2513846065025795589.4192107700506759617. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005538461s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-314275
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-314275
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=no-preload-314275
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_53_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:53:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-314275
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:55:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:54:53 +0000   Mon, 06 Oct 2025 19:53:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:54:53 +0000   Mon, 06 Oct 2025 19:53:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:54:53 +0000   Mon, 06 Oct 2025 19:53:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:54:53 +0000   Mon, 06 Oct 2025 19:53:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-314275
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 217e20117b5041af99443eed96fc31f8
	  System UUID:                063eafb6-36b6-4179-b2e4-ad5bbf368dcb
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-tccns                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-no-preload-314275                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-b6hb7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-314275              250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-314275     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-nr6pc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-314275              100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bhkxd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-k8dzl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 110s                   kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Warning  CgroupV1                 2m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node no-preload-314275 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node no-preload-314275 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node no-preload-314275 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  118s                   kubelet          Node no-preload-314275 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 118s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    118s                   kubelet          Node no-preload-314275 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     118s                   kubelet          Node no-preload-314275 status is now: NodeHasSufficientPID
	  Normal   Starting                 118s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           114s                   node-controller  Node no-preload-314275 event: Registered Node no-preload-314275 in Controller
	  Normal   NodeReady                97s                    kubelet          Node no-preload-314275 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node no-preload-314275 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node no-preload-314275 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node no-preload-314275 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node no-preload-314275 event: Registered Node no-preload-314275 in Controller
	
	
	==> dmesg <==
	[Oct 6 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:23] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:52] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:54] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [186d0a80c923460b51dd7d0acf3cdfa8b7d291a0ad41f9d28777566a35ac63c5] <==
	{"level":"warn","ts":"2025-10-06T19:54:21.203213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.222187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.242836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.263153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.275580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.290016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.327421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.332302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.349949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.380496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.400098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.418934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.436633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.456950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.473154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.505665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.520979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.543465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.564289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.588072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.607215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.638214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.691820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.719089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:54:21.801684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42216","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:55:19 up  1:37,  0 user,  load average: 2.10, 2.12, 1.83
	Linux no-preload-314275 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [51aa04147551728423412f83b162475268c3ea26736624b38098bb9292184c3a] <==
	I1006 19:54:24.459202       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:54:24.459545       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1006 19:54:24.459651       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:54:24.459662       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:54:24.459674       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:54:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:54:24.652378       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:54:24.652452       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:54:24.652484       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:54:24.653675       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1006 19:54:54.652987       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1006 19:54:54.653108       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1006 19:54:54.653002       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1006 19:54:54.654072       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1006 19:54:56.053043       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:54:56.053089       1 metrics.go:72] Registering metrics
	I1006 19:54:56.053164       1 controller.go:711] "Syncing nftables rules"
	I1006 19:55:04.652781       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:55:04.652858       1 main.go:301] handling current node
	I1006 19:55:14.659852       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:55:14.659885       1 main.go:301] handling current node
	
	
	==> kube-apiserver [730df4b08913c9671db866cfd407ba257e98404db9dacaf36ebb44dfbc61ab38] <==
	I1006 19:54:23.021383       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1006 19:54:23.022062       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1006 19:54:23.022193       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 19:54:23.036300       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1006 19:54:23.037070       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1006 19:54:23.037123       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1006 19:54:23.037136       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:54:23.037138       1 policy_source.go:240] refreshing policies
	I1006 19:54:23.037153       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1006 19:54:23.039142       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1006 19:54:23.054658       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:54:23.091499       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1006 19:54:23.091551       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1006 19:54:23.105091       1 cache.go:39] Caches are synced for autoregister controller
	I1006 19:54:23.598562       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:54:23.832064       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 19:54:23.841911       1 controller.go:667] quota admission added evaluator for: namespaces
	I1006 19:54:24.046339       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 19:54:24.188610       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:54:24.221338       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:54:24.406620       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.147.225"}
	I1006 19:54:24.469198       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.191.208"}
	I1006 19:54:26.522109       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 19:54:26.576615       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 19:54:26.623033       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e3421553bee92bb445fc71dc28a9e73f44a2e2fdf3f7ebc32bcf8e628b6f1a02] <==
	I1006 19:54:26.135578       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1006 19:54:26.136085       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1006 19:54:26.136160       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1006 19:54:26.138426       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 19:54:26.142686       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1006 19:54:26.147002       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1006 19:54:26.152285       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1006 19:54:26.152387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:54:26.154647       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1006 19:54:26.163796       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1006 19:54:26.164874       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1006 19:54:26.164944       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1006 19:54:26.165065       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:54:26.165074       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:54:26.165080       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 19:54:26.165371       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1006 19:54:26.165601       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1006 19:54:26.166235       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1006 19:54:26.166292       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1006 19:54:26.177571       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 19:54:26.180708       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1006 19:54:26.188936       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1006 19:54:26.189036       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1006 19:54:26.189126       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-314275"
	I1006 19:54:26.189174       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [45f69ddd3c69b01b9fe25ae44f01f57b7f556ed19c8d7d38d509adc8fa4ce0a4] <==
	I1006 19:54:24.588039       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:54:24.670533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:54:24.774361       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:54:24.774406       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1006 19:54:24.774496       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:54:24.825205       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:54:24.828050       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:54:24.843157       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:54:24.843561       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:54:24.843842       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:54:24.845131       1 config.go:200] "Starting service config controller"
	I1006 19:54:24.845216       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:54:24.845261       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:54:24.845292       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:54:24.845327       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:54:24.845354       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:54:24.846045       1 config.go:309] "Starting node config controller"
	I1006 19:54:24.848651       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:54:24.848731       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:54:24.945386       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 19:54:24.945384       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1006 19:54:24.945422       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ffc6fa832b5c0989f18a443f577b66e36306a055f9f4dda54f1184a47f1c75de] <==
	I1006 19:54:20.504956       1 serving.go:386] Generated self-signed cert in-memory
	I1006 19:54:23.192060       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 19:54:23.192092       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:54:23.205736       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 19:54:23.205818       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1006 19:54:23.205835       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1006 19:54:23.205867       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 19:54:23.226924       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:54:23.239651       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:54:23.239260       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:54:23.239689       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:54:23.309983       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1006 19:54:23.343792       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:54:23.343855       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 19:54:26 no-preload-314275 kubelet[766]: I1006 19:54:26.940644     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aa6e6ce9-aebb-4159-a51c-d41852eb0898-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-bhkxd\" (UID: \"aa6e6ce9-aebb-4159-a51c-d41852eb0898\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhkxd"
	Oct 06 19:54:27 no-preload-314275 kubelet[766]: W1006 19:54:27.116164     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/crio-700ce2ed2056c33cab2ba4e17dd176dcd5eb03afe33e4e55c622d0d4fe0275f3 WatchSource:0}: Error finding container 700ce2ed2056c33cab2ba4e17dd176dcd5eb03afe33e4e55c622d0d4fe0275f3: Status 404 returned error can't find the container with id 700ce2ed2056c33cab2ba4e17dd176dcd5eb03afe33e4e55c622d0d4fe0275f3
	Oct 06 19:54:27 no-preload-314275 kubelet[766]: W1006 19:54:27.126727     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3b7c30b4fccfef647cdb0e7ecc61769f9844c54f8180f8640cdce7f83fc9c9ab/crio-e9dcfec14e580fa37b62d147235e3a9fad19f7a839be6ffbaad4701e320fc593 WatchSource:0}: Error finding container e9dcfec14e580fa37b62d147235e3a9fad19f7a839be6ffbaad4701e320fc593: Status 404 returned error can't find the container with id e9dcfec14e580fa37b62d147235e3a9fad19f7a839be6ffbaad4701e320fc593
	Oct 06 19:54:29 no-preload-314275 kubelet[766]: I1006 19:54:29.024199     766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 06 19:54:32 no-preload-314275 kubelet[766]: I1006 19:54:32.015231     766 scope.go:117] "RemoveContainer" containerID="16274262323e3386bf53955850fc0bd2b48456ca122166dfec8b6ce43364f95a"
	Oct 06 19:54:33 no-preload-314275 kubelet[766]: I1006 19:54:33.018821     766 scope.go:117] "RemoveContainer" containerID="16274262323e3386bf53955850fc0bd2b48456ca122166dfec8b6ce43364f95a"
	Oct 06 19:54:33 no-preload-314275 kubelet[766]: I1006 19:54:33.019192     766 scope.go:117] "RemoveContainer" containerID="897b3c592763922f51e741170f3870edc47697d0d8c33848e9c48b9e972fbf4d"
	Oct 06 19:54:33 no-preload-314275 kubelet[766]: E1006 19:54:33.019373     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhkxd_kubernetes-dashboard(aa6e6ce9-aebb-4159-a51c-d41852eb0898)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhkxd" podUID="aa6e6ce9-aebb-4159-a51c-d41852eb0898"
	Oct 06 19:54:34 no-preload-314275 kubelet[766]: I1006 19:54:34.023921     766 scope.go:117] "RemoveContainer" containerID="897b3c592763922f51e741170f3870edc47697d0d8c33848e9c48b9e972fbf4d"
	Oct 06 19:54:34 no-preload-314275 kubelet[766]: E1006 19:54:34.024063     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhkxd_kubernetes-dashboard(aa6e6ce9-aebb-4159-a51c-d41852eb0898)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhkxd" podUID="aa6e6ce9-aebb-4159-a51c-d41852eb0898"
	Oct 06 19:54:36 no-preload-314275 kubelet[766]: I1006 19:54:36.048895     766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-k8dzl" podStartSLOduration=1.288508078 podStartE2EDuration="10.048870426s" podCreationTimestamp="2025-10-06 19:54:26 +0000 UTC" firstStartedPulling="2025-10-06 19:54:27.130278458 +0000 UTC m=+9.517025313" lastFinishedPulling="2025-10-06 19:54:35.890640806 +0000 UTC m=+18.277387661" observedRunningTime="2025-10-06 19:54:36.047479607 +0000 UTC m=+18.434226463" watchObservedRunningTime="2025-10-06 19:54:36.048870426 +0000 UTC m=+18.435617281"
	Oct 06 19:54:41 no-preload-314275 kubelet[766]: I1006 19:54:41.556460     766 scope.go:117] "RemoveContainer" containerID="897b3c592763922f51e741170f3870edc47697d0d8c33848e9c48b9e972fbf4d"
	Oct 06 19:54:41 no-preload-314275 kubelet[766]: E1006 19:54:41.556662     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhkxd_kubernetes-dashboard(aa6e6ce9-aebb-4159-a51c-d41852eb0898)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhkxd" podUID="aa6e6ce9-aebb-4159-a51c-d41852eb0898"
	Oct 06 19:54:53 no-preload-314275 kubelet[766]: I1006 19:54:53.828194     766 scope.go:117] "RemoveContainer" containerID="897b3c592763922f51e741170f3870edc47697d0d8c33848e9c48b9e972fbf4d"
	Oct 06 19:54:54 no-preload-314275 kubelet[766]: I1006 19:54:54.086527     766 scope.go:117] "RemoveContainer" containerID="897b3c592763922f51e741170f3870edc47697d0d8c33848e9c48b9e972fbf4d"
	Oct 06 19:54:54 no-preload-314275 kubelet[766]: I1006 19:54:54.086719     766 scope.go:117] "RemoveContainer" containerID="231abe139bb7ae75ba79b083238e5ef05c24806414c17ad3f26f0fc8b871d4dc"
	Oct 06 19:54:54 no-preload-314275 kubelet[766]: E1006 19:54:54.086884     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhkxd_kubernetes-dashboard(aa6e6ce9-aebb-4159-a51c-d41852eb0898)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhkxd" podUID="aa6e6ce9-aebb-4159-a51c-d41852eb0898"
	Oct 06 19:54:55 no-preload-314275 kubelet[766]: I1006 19:54:55.092917     766 scope.go:117] "RemoveContainer" containerID="27525ce70f1fdc3814390274d664d5a08475d03bc6ea2fe6c3c6391cd6fe9260"
	Oct 06 19:55:01 no-preload-314275 kubelet[766]: I1006 19:55:01.556532     766 scope.go:117] "RemoveContainer" containerID="231abe139bb7ae75ba79b083238e5ef05c24806414c17ad3f26f0fc8b871d4dc"
	Oct 06 19:55:01 no-preload-314275 kubelet[766]: E1006 19:55:01.556726     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhkxd_kubernetes-dashboard(aa6e6ce9-aebb-4159-a51c-d41852eb0898)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhkxd" podUID="aa6e6ce9-aebb-4159-a51c-d41852eb0898"
	Oct 06 19:55:12 no-preload-314275 kubelet[766]: I1006 19:55:12.827012     766 scope.go:117] "RemoveContainer" containerID="231abe139bb7ae75ba79b083238e5ef05c24806414c17ad3f26f0fc8b871d4dc"
	Oct 06 19:55:12 no-preload-314275 kubelet[766]: E1006 19:55:12.827192     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-bhkxd_kubernetes-dashboard(aa6e6ce9-aebb-4159-a51c-d41852eb0898)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bhkxd" podUID="aa6e6ce9-aebb-4159-a51c-d41852eb0898"
	Oct 06 19:55:13 no-preload-314275 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 06 19:55:13 no-preload-314275 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 06 19:55:13 no-preload-314275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c02a55032bbff6f8848c6ea1b9ad39ce634e876f7d260d85021fe5ee875a7b0f] <==
	2025/10/06 19:54:35 Using namespace: kubernetes-dashboard
	2025/10/06 19:54:35 Using in-cluster config to connect to apiserver
	2025/10/06 19:54:35 Using secret token for csrf signing
	2025/10/06 19:54:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/06 19:54:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/06 19:54:35 Successful initial request to the apiserver, version: v1.34.1
	2025/10/06 19:54:35 Generating JWE encryption key
	2025/10/06 19:54:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/06 19:54:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/06 19:54:37 Initializing JWE encryption key from synchronized object
	2025/10/06 19:54:37 Creating in-cluster Sidecar client
	2025/10/06 19:54:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/06 19:54:37 Serving insecurely on HTTP port: 9090
	2025/10/06 19:55:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/06 19:54:35 Starting overwatch
	
	
	==> storage-provisioner [27525ce70f1fdc3814390274d664d5a08475d03bc6ea2fe6c3c6391cd6fe9260] <==
	I1006 19:54:24.585082       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1006 19:54:54.596573       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [47a18794a163924d8e7f1733f13eb816ab22d7b97358bc2c00b3efea43cb69cc] <==
	I1006 19:54:55.150084       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 19:54:55.182571       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 19:54:55.183449       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1006 19:54:55.186271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:54:58.641795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:02.902087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:06.500565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:09.553622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:12.575957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:12.593194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:55:12.593359       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 19:55:12.593562       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-314275_c91e5fde-de52-4f15-aecc-7f93914809f7!
	I1006 19:55:12.601831       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b6bc9f45-80c8-40cf-884b-42fa87f06e10", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-314275_c91e5fde-de52-4f15-aecc-7f93914809f7 became leader
	W1006 19:55:12.616619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:12.629491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:55:12.695208       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-314275_c91e5fde-de52-4f15-aecc-7f93914809f7!
	W1006 19:55:14.632960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:14.639738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:16.645241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:16.653177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:18.658467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:55:18.674851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-314275 -n no-preload-314275
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-314275 -n no-preload-314275: exit status 2 (620.816983ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-314275 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.72s)
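A minimal reproduction sketch for this Pause failure, assuming the same built arm64 binary path and profile name as this run (the pause invocation mirrors the one logged for the embed-certs profile below; the status and kubectl commands are the exact post-mortem checks shown above):

	out/minikube-linux-arm64 pause -p no-preload-314275 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-314275 -n no-preload-314275
	kubectl --context no-preload-314275 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running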

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-830393 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-830393 --alsologtostderr -v=1: exit status 80 (2.452716084s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-830393 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:56:17.567827  208670 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:56:17.567965  208670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:56:17.567976  208670 out.go:374] Setting ErrFile to fd 2...
	I1006 19:56:17.567981  208670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:56:17.568252  208670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:56:17.568560  208670 out.go:368] Setting JSON to false
	I1006 19:56:17.568588  208670 mustload.go:65] Loading cluster: embed-certs-830393
	I1006 19:56:17.568986  208670 config.go:182] Loaded profile config "embed-certs-830393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:56:17.569453  208670 cli_runner.go:164] Run: docker container inspect embed-certs-830393 --format={{.State.Status}}
	I1006 19:56:17.590531  208670 host.go:66] Checking if "embed-certs-830393" exists ...
	I1006 19:56:17.590859  208670 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:56:17.657190  208670 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-06 19:56:17.647613702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:56:17.657899  208670 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-830393 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1006 19:56:17.661659  208670 out.go:179] * Pausing node embed-certs-830393 ... 
	I1006 19:56:17.664760  208670 host.go:66] Checking if "embed-certs-830393" exists ...
	I1006 19:56:17.665114  208670 ssh_runner.go:195] Run: systemctl --version
	I1006 19:56:17.665169  208670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-830393
	I1006 19:56:17.689299  208670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/embed-certs-830393/id_rsa Username:docker}
	I1006 19:56:17.786634  208670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:56:17.805357  208670 pause.go:51] kubelet running: true
	I1006 19:56:17.805433  208670 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:56:18.081391  208670 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:56:18.081492  208670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:56:18.159043  208670 cri.go:89] found id: "f4d2a9fccfdd6f393221414a6c0aa8a77d35201a86d3a04c6be9f91649344122"
	I1006 19:56:18.159066  208670 cri.go:89] found id: "f683a297d5110be5e2db1398d5d9f375e296e6251a59736d916eadc918fcd276"
	I1006 19:56:18.159074  208670 cri.go:89] found id: "e94dede38eab415b5d932275312bb9c5abecf5f78916a03894c7a79d42672cd6"
	I1006 19:56:18.159079  208670 cri.go:89] found id: "53fa3c45d9d3fe4b234e83920d4c40b4da0c696f8345d0661f1282310ea0a883"
	I1006 19:56:18.159082  208670 cri.go:89] found id: "e2adf0d6acf04da953a1454524da4257845c1f019a6c67dbf5feef047497298e"
	I1006 19:56:18.159085  208670 cri.go:89] found id: "feaea5c59158221450ddfd0bf1e25c002e959fd1e69f33f065cfebc4fa3ff1dc"
	I1006 19:56:18.159106  208670 cri.go:89] found id: "5aac2d3517c5e661bd5a3fa05304330ba6bdf4006024e9d2204b7537e85be506"
	I1006 19:56:18.159116  208670 cri.go:89] found id: "5d7f3ed0461886cd83205860bc71296168bd61a981970cd2dc78f60ba117030e"
	I1006 19:56:18.159120  208670 cri.go:89] found id: "1ca5cdfb3593e4dad733094500a9fe2b21c50f4f50a60370c3764e89baad98ac"
	I1006 19:56:18.159126  208670 cri.go:89] found id: "402ce39bb02015c167602ffe294d21c2474937b76810fa99052cb19af2dbce0b"
	I1006 19:56:18.159130  208670 cri.go:89] found id: "02e72812046572c20424f4bde19f4301ef6c72f7c0f044b1a3c716bcfe7a2943"
	I1006 19:56:18.159133  208670 cri.go:89] found id: ""
	I1006 19:56:18.159196  208670 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:56:18.170567  208670 retry.go:31] will retry after 222.636404ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:56:18Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:56:18.394015  208670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:56:18.407445  208670 pause.go:51] kubelet running: false
	I1006 19:56:18.407539  208670 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:56:18.589334  208670 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:56:18.589436  208670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:56:18.656013  208670 cri.go:89] found id: "f4d2a9fccfdd6f393221414a6c0aa8a77d35201a86d3a04c6be9f91649344122"
	I1006 19:56:18.656035  208670 cri.go:89] found id: "f683a297d5110be5e2db1398d5d9f375e296e6251a59736d916eadc918fcd276"
	I1006 19:56:18.656040  208670 cri.go:89] found id: "e94dede38eab415b5d932275312bb9c5abecf5f78916a03894c7a79d42672cd6"
	I1006 19:56:18.656044  208670 cri.go:89] found id: "53fa3c45d9d3fe4b234e83920d4c40b4da0c696f8345d0661f1282310ea0a883"
	I1006 19:56:18.656048  208670 cri.go:89] found id: "e2adf0d6acf04da953a1454524da4257845c1f019a6c67dbf5feef047497298e"
	I1006 19:56:18.656052  208670 cri.go:89] found id: "feaea5c59158221450ddfd0bf1e25c002e959fd1e69f33f065cfebc4fa3ff1dc"
	I1006 19:56:18.656055  208670 cri.go:89] found id: "5aac2d3517c5e661bd5a3fa05304330ba6bdf4006024e9d2204b7537e85be506"
	I1006 19:56:18.656058  208670 cri.go:89] found id: "5d7f3ed0461886cd83205860bc71296168bd61a981970cd2dc78f60ba117030e"
	I1006 19:56:18.656062  208670 cri.go:89] found id: "1ca5cdfb3593e4dad733094500a9fe2b21c50f4f50a60370c3764e89baad98ac"
	I1006 19:56:18.656067  208670 cri.go:89] found id: "402ce39bb02015c167602ffe294d21c2474937b76810fa99052cb19af2dbce0b"
	I1006 19:56:18.656071  208670 cri.go:89] found id: "02e72812046572c20424f4bde19f4301ef6c72f7c0f044b1a3c716bcfe7a2943"
	I1006 19:56:18.656074  208670 cri.go:89] found id: ""
	I1006 19:56:18.656129  208670 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:56:18.667373  208670 retry.go:31] will retry after 252.074993ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:56:18Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:56:18.919860  208670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:56:18.932957  208670 pause.go:51] kubelet running: false
	I1006 19:56:18.933057  208670 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:56:19.122357  208670 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:56:19.122465  208670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:56:19.197445  208670 cri.go:89] found id: "f4d2a9fccfdd6f393221414a6c0aa8a77d35201a86d3a04c6be9f91649344122"
	I1006 19:56:19.197472  208670 cri.go:89] found id: "f683a297d5110be5e2db1398d5d9f375e296e6251a59736d916eadc918fcd276"
	I1006 19:56:19.197477  208670 cri.go:89] found id: "e94dede38eab415b5d932275312bb9c5abecf5f78916a03894c7a79d42672cd6"
	I1006 19:56:19.197480  208670 cri.go:89] found id: "53fa3c45d9d3fe4b234e83920d4c40b4da0c696f8345d0661f1282310ea0a883"
	I1006 19:56:19.197484  208670 cri.go:89] found id: "e2adf0d6acf04da953a1454524da4257845c1f019a6c67dbf5feef047497298e"
	I1006 19:56:19.197487  208670 cri.go:89] found id: "feaea5c59158221450ddfd0bf1e25c002e959fd1e69f33f065cfebc4fa3ff1dc"
	I1006 19:56:19.197491  208670 cri.go:89] found id: "5aac2d3517c5e661bd5a3fa05304330ba6bdf4006024e9d2204b7537e85be506"
	I1006 19:56:19.197494  208670 cri.go:89] found id: "5d7f3ed0461886cd83205860bc71296168bd61a981970cd2dc78f60ba117030e"
	I1006 19:56:19.197497  208670 cri.go:89] found id: "1ca5cdfb3593e4dad733094500a9fe2b21c50f4f50a60370c3764e89baad98ac"
	I1006 19:56:19.197524  208670 cri.go:89] found id: "402ce39bb02015c167602ffe294d21c2474937b76810fa99052cb19af2dbce0b"
	I1006 19:56:19.197532  208670 cri.go:89] found id: "02e72812046572c20424f4bde19f4301ef6c72f7c0f044b1a3c716bcfe7a2943"
	I1006 19:56:19.197535  208670 cri.go:89] found id: ""
	I1006 19:56:19.197583  208670 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:56:19.211220  208670 retry.go:31] will retry after 458.266855ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:56:19Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:56:19.669826  208670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:56:19.684714  208670 pause.go:51] kubelet running: false
	I1006 19:56:19.684853  208670 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:56:19.855752  208670 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:56:19.855833  208670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:56:19.940975  208670 cri.go:89] found id: "f4d2a9fccfdd6f393221414a6c0aa8a77d35201a86d3a04c6be9f91649344122"
	I1006 19:56:19.940997  208670 cri.go:89] found id: "f683a297d5110be5e2db1398d5d9f375e296e6251a59736d916eadc918fcd276"
	I1006 19:56:19.941002  208670 cri.go:89] found id: "e94dede38eab415b5d932275312bb9c5abecf5f78916a03894c7a79d42672cd6"
	I1006 19:56:19.941007  208670 cri.go:89] found id: "53fa3c45d9d3fe4b234e83920d4c40b4da0c696f8345d0661f1282310ea0a883"
	I1006 19:56:19.941010  208670 cri.go:89] found id: "e2adf0d6acf04da953a1454524da4257845c1f019a6c67dbf5feef047497298e"
	I1006 19:56:19.941014  208670 cri.go:89] found id: "feaea5c59158221450ddfd0bf1e25c002e959fd1e69f33f065cfebc4fa3ff1dc"
	I1006 19:56:19.941017  208670 cri.go:89] found id: "5aac2d3517c5e661bd5a3fa05304330ba6bdf4006024e9d2204b7537e85be506"
	I1006 19:56:19.941021  208670 cri.go:89] found id: "5d7f3ed0461886cd83205860bc71296168bd61a981970cd2dc78f60ba117030e"
	I1006 19:56:19.941024  208670 cri.go:89] found id: "1ca5cdfb3593e4dad733094500a9fe2b21c50f4f50a60370c3764e89baad98ac"
	I1006 19:56:19.941030  208670 cri.go:89] found id: "402ce39bb02015c167602ffe294d21c2474937b76810fa99052cb19af2dbce0b"
	I1006 19:56:19.941059  208670 cri.go:89] found id: "02e72812046572c20424f4bde19f4301ef6c72f7c0f044b1a3c716bcfe7a2943"
	I1006 19:56:19.941069  208670 cri.go:89] found id: ""
	I1006 19:56:19.941141  208670 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:56:19.956186  208670 out.go:203] 
	W1006 19:56:19.958994  208670 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:56:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:56:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 19:56:19.959020  208670 out.go:285] * 
	* 
	W1006 19:56:19.963987  208670 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 19:56:19.967017  208670 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-830393 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-830393
helpers_test.go:243: (dbg) docker inspect embed-certs-830393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a",
	        "Created": "2025-10-06T19:53:31.962897615Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 202803,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:55:13.076516701Z",
	            "FinishedAt": "2025-10-06T19:55:12.118758111Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a-json.log",
	        "Name": "/embed-certs-830393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-830393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-830393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a",
	                "LowerDir": "/var/lib/docker/overlay2/bd523f7fa06a7e0ad89b9a8ced25801bcb8b1dd5b3445e1afcebc1a259cb4596-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd523f7fa06a7e0ad89b9a8ced25801bcb8b1dd5b3445e1afcebc1a259cb4596/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd523f7fa06a7e0ad89b9a8ced25801bcb8b1dd5b3445e1afcebc1a259cb4596/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd523f7fa06a7e0ad89b9a8ced25801bcb8b1dd5b3445e1afcebc1a259cb4596/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-830393",
	                "Source": "/var/lib/docker/volumes/embed-certs-830393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-830393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-830393",
	                "name.minikube.sigs.k8s.io": "embed-certs-830393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c79102a749b776d621309ccb77b4cde76ba9fb6e7fcd479ce2a2d5384efbc26",
	            "SandboxKey": "/var/run/docker/netns/0c79102a749b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-830393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:2a:d3:82:4e:91",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1800026322b057a83604241d8aa91bc0c8c07713c3ce5f5e76ba25af81a1e332",
	                    "EndpointID": "eeb0e0b03af579e23ecaefd123c161766c30e9c4815a2122317c1401a492670e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-830393",
	                        "db0504489522"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-830393 -n embed-certs-830393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-830393 -n embed-certs-830393: exit status 2 (388.098526ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-830393 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-830393 logs -n 25: (1.330768263s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p old-k8s-version-100545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:51 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p cert-expiration-585086 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-585086       │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:53 UTC │
	│ image   │ old-k8s-version-100545 image list --format=json                                                                                                                                                                                               │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ pause   │ -p old-k8s-version-100545 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │                     │
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:53 UTC │
	│ delete  │ -p cert-expiration-585086                                                                                                                                                                                                                     │ cert-expiration-585086       │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:53 UTC │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-314275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │                     │
	│ stop    │ -p no-preload-314275 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:54 UTC │
	│ addons  │ enable dashboard -p no-preload-314275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:54 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-830393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │                     │
	│ stop    │ -p embed-certs-830393 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-830393 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ pause   │ -p no-preload-314275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │                     │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:56 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p disable-driver-mounts-932453                                                                                                                                                                                                               │ disable-driver-mounts-932453 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ start   │ -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │                     │
	│ image   │ embed-certs-830393 image list --format=json                                                                                                                                                                                                   │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ pause   │ -p embed-certs-830393 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:55:24
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:55:24.010489  205530 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:55:24.010743  205530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:55:24.010778  205530 out.go:374] Setting ErrFile to fd 2...
	I1006 19:55:24.010800  205530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:55:24.011093  205530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:55:24.011562  205530 out.go:368] Setting JSON to false
	I1006 19:55:24.012571  205530 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5859,"bootTime":1759774665,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:55:24.012673  205530 start.go:140] virtualization:  
	I1006 19:55:24.016561  205530 out.go:179] * [default-k8s-diff-port-997276] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:55:24.020767  205530 notify.go:220] Checking for updates...
	I1006 19:55:24.024229  205530 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:55:24.027381  205530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:55:24.030362  205530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:55:24.033387  205530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:55:24.036239  205530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:55:24.039074  205530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:55:24.042544  205530 config.go:182] Loaded profile config "embed-certs-830393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:55:24.042713  205530 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:55:24.110328  205530 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:55:24.110461  205530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:55:24.213486  205530 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-06 19:55:24.2001581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:55:24.213597  205530 docker.go:318] overlay module found
	I1006 19:55:24.216955  205530 out.go:179] * Using the docker driver based on user configuration
	I1006 19:55:24.219889  205530 start.go:304] selected driver: docker
	I1006 19:55:24.219906  205530 start.go:924] validating driver "docker" against <nil>
	I1006 19:55:24.219919  205530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:55:24.220663  205530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:55:24.321463  205530 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-06 19:55:24.306531279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:55:24.321630  205530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 19:55:24.321869  205530 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:55:24.324909  205530 out.go:179] * Using Docker driver with root privileges
	I1006 19:55:24.327756  205530 cni.go:84] Creating CNI manager for ""
	I1006 19:55:24.327825  205530 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:55:24.327837  205530 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 19:55:24.327907  205530 start.go:348] cluster config:
	{Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:55:24.331041  205530 out.go:179] * Starting "default-k8s-diff-port-997276" primary control-plane node in "default-k8s-diff-port-997276" cluster
	I1006 19:55:24.333882  205530 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:55:24.336763  205530 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:55:24.339607  205530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:55:24.339660  205530 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:55:24.339670  205530 cache.go:58] Caching tarball of preloaded images
	I1006 19:55:24.339794  205530 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:55:24.339804  205530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:55:24.339920  205530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/config.json ...
	I1006 19:55:24.339939  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/config.json: {Name:mkd5ce4a3412eea9ac3e4f2f74bfe10ec01e14ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:24.340102  205530 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:55:24.372782  205530 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:55:24.372802  205530 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:55:24.372824  205530 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:55:24.372849  205530 start.go:360] acquireMachinesLock for default-k8s-diff-port-997276: {Name:mk7b25a356bfff93cc3ef03a69dea8b7e852b578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:55:24.372962  205530 start.go:364] duration metric: took 95.214µs to acquireMachinesLock for "default-k8s-diff-port-997276"
	I1006 19:55:24.372992  205530 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:55:24.373063  205530 start.go:125] createHost starting for "" (driver="docker")
	I1006 19:55:22.831989  202624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 19:55:24.376616  205530 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 19:55:24.376848  205530 start.go:159] libmachine.API.Create for "default-k8s-diff-port-997276" (driver="docker")
	I1006 19:55:24.376887  205530 client.go:168] LocalClient.Create starting
	I1006 19:55:24.376976  205530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem
	I1006 19:55:24.377013  205530 main.go:141] libmachine: Decoding PEM data...
	I1006 19:55:24.377026  205530 main.go:141] libmachine: Parsing certificate...
	I1006 19:55:24.377079  205530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem
	I1006 19:55:24.377096  205530 main.go:141] libmachine: Decoding PEM data...
	I1006 19:55:24.377107  205530 main.go:141] libmachine: Parsing certificate...
	I1006 19:55:24.377468  205530 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 19:55:24.404420  205530 cli_runner.go:211] docker network inspect default-k8s-diff-port-997276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 19:55:24.404544  205530 network_create.go:284] running [docker network inspect default-k8s-diff-port-997276] to gather additional debugging logs...
	I1006 19:55:24.404569  205530 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997276
	W1006 19:55:24.445564  205530 cli_runner.go:211] docker network inspect default-k8s-diff-port-997276 returned with exit code 1
	I1006 19:55:24.445591  205530 network_create.go:287] error running [docker network inspect default-k8s-diff-port-997276]: docker network inspect default-k8s-diff-port-997276: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-997276 not found
	I1006 19:55:24.445616  205530 network_create.go:289] output of [docker network inspect default-k8s-diff-port-997276]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-997276 not found
	
	** /stderr **
	I1006 19:55:24.445715  205530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:55:24.470853  205530 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7058eae896da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:57:c0:bd:de:1b} reservation:<nil>}
	I1006 19:55:24.471156  205530 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e321ce8cf6dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:e6:76:87:fe:bb} reservation:<nil>}
	I1006 19:55:24.471475  205530 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-17bdbd7bd7be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:93:f3:19:55:10} reservation:<nil>}
	I1006 19:55:24.471918  205530 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7860}
	I1006 19:55:24.471936  205530 network_create.go:124] attempt to create docker network default-k8s-diff-port-997276 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1006 19:55:24.471996  205530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-997276 default-k8s-diff-port-997276
	I1006 19:55:24.546512  205530 network_create.go:108] docker network default-k8s-diff-port-997276 192.168.76.0/24 created
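The subnet probing just above (192.168.49.0/24, 58 and 67 are skipped as taken, 76 is chosen) follows a plain "first free candidate" pattern. A minimal sketch of that idea, assuming the set of taken subnets has already been collected from docker network inspect; the step of 9 in the third octet and the helper name are illustrative, not minikube's actual implementation:

package main

import "fmt"

// firstFreeSubnet walks 192.168.x.0/24 candidates (step of 9 in the third
// octet, mirroring the 49 -> 58 -> 67 -> 76 progression seen in the log)
// and returns the first one not already claimed by an existing bridge.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	// Subnets the log reports as already in use by other minikube networks.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	if cidr, ok := firstFreeSubnet(taken); ok {
		fmt.Println("using free private subnet", cidr) // prints 192.168.76.0/24
	}
}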
	I1006 19:55:24.546540  205530 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-997276" container
	I1006 19:55:24.546609  205530 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 19:55:24.580477  205530 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-997276 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-997276 --label created_by.minikube.sigs.k8s.io=true
	I1006 19:55:24.609503  205530 oci.go:103] Successfully created a docker volume default-k8s-diff-port-997276
	I1006 19:55:24.609598  205530 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-997276-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-997276 --entrypoint /usr/bin/test -v default-k8s-diff-port-997276:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 19:55:25.350514  205530 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-997276
	I1006 19:55:25.350558  205530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:55:25.350577  205530 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 19:55:25.350642  205530 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-997276:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
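The extraction step above runs a throwaway kicbase container whose only job is to untar the lz4-compressed preload into the named volume. A minimal sketch of the same docker invocation via os/exec, assuming docker is on PATH; the tarball path, volume name, and image tag are illustrative placeholders rather than the exact values from this run:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Illustrative names; the real run mounts the versioned preload tarball
	// and the per-profile volume created a few lines earlier in the log.
	tarball := "/path/to/preloaded-images.tar.lz4"
	volume := "example-profile"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48"

	// docker run --rm --entrypoint /usr/bin/tar \
	//   -v <tarball>:/preloaded.tar:ro -v <volume>:/extractDir <image> \
	//   -I lz4 -xf /preloaded.tar -C /extractDir
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}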
	I1006 19:55:28.473231  202624 node_ready.go:49] node "embed-certs-830393" is "Ready"
	I1006 19:55:28.473259  202624 node_ready.go:38] duration metric: took 6.448446215s for node "embed-certs-830393" to be "Ready" ...
	I1006 19:55:28.473274  202624 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:55:28.473327  202624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:55:31.140861  202624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.962086933s)
	I1006 19:55:31.140919  202624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.825069701s)
	I1006 19:55:31.141289  202624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.309263343s)
	I1006 19:55:31.141988  202624 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.668647106s)
	I1006 19:55:31.142006  202624 api_server.go:72] duration metric: took 9.648374145s to wait for apiserver process to appear ...
	I1006 19:55:31.142013  202624 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:55:31.142028  202624 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:55:31.145379  202624 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-830393 addons enable metrics-server
	
	I1006 19:55:31.190762  202624 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 19:55:31.190789  202624 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
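The 500-then-200 sequence here is the usual rbac/bootstrap-roles race during apiserver startup; the health wait simply keeps polling /healthz until the status flips. A minimal sketch of such a poll against the endpoint shown in the log, assuming TLS verification is skipped for illustration (the real check trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint from the log for this profile.
	url := "https://192.168.85.2:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: skip verification instead of loading the
		// cluster CA the way the real health check does.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}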
	I1006 19:55:31.241284  202624 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1006 19:55:31.244321  202624 addons.go:514] duration metric: took 9.750265425s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1006 19:55:31.642525  202624 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:55:31.650672  202624 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1006 19:55:31.651858  202624 api_server.go:141] control plane version: v1.34.1
	I1006 19:55:31.651888  202624 api_server.go:131] duration metric: took 509.868862ms to wait for apiserver health ...
	I1006 19:55:31.651898  202624 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:55:31.656078  202624 system_pods.go:59] 8 kube-system pods found
	I1006 19:55:31.656113  202624 system_pods.go:61] "coredns-66bc5c9577-8k4cq" [e6b9c4d9-313c-467e-b448-9867361a42fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:55:31.656125  202624 system_pods.go:61] "etcd-embed-certs-830393" [94302cea-26c9-4ff5-bcff-ef3903324bfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:55:31.656131  202624 system_pods.go:61] "kindnet-g7jnc" [4f869226-920a-4722-aa82-308466e32e59] Running
	I1006 19:55:31.656141  202624 system_pods.go:61] "kube-apiserver-embed-certs-830393" [7c31daf2-0bbe-4d0d-a233-79da10ee31b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:55:31.656151  202624 system_pods.go:61] "kube-controller-manager-embed-certs-830393" [5cdae168-9643-4ada-ac16-35f8f337d1fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:55:31.656163  202624 system_pods.go:61] "kube-proxy-xl5tt" [75361417-428d-4ea5-89ad-4570024b8916] Running
	I1006 19:55:31.656173  202624 system_pods.go:61] "kube-scheduler-embed-certs-830393" [be02b2b3-c2c3-458c-8ec2-3d6e2b334427] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:55:31.656179  202624 system_pods.go:61] "storage-provisioner" [c6173113-55a1-44a2-b622-34ba6868ea4c] Running
	I1006 19:55:31.656186  202624 system_pods.go:74] duration metric: took 4.282473ms to wait for pod list to return data ...
	I1006 19:55:31.656195  202624 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:55:31.658343  202624 default_sa.go:45] found service account: "default"
	I1006 19:55:31.658365  202624 default_sa.go:55] duration metric: took 2.163904ms for default service account to be created ...
	I1006 19:55:31.658383  202624 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 19:55:31.661623  202624 system_pods.go:86] 8 kube-system pods found
	I1006 19:55:31.661657  202624 system_pods.go:89] "coredns-66bc5c9577-8k4cq" [e6b9c4d9-313c-467e-b448-9867361a42fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:55:31.661666  202624 system_pods.go:89] "etcd-embed-certs-830393" [94302cea-26c9-4ff5-bcff-ef3903324bfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:55:31.661675  202624 system_pods.go:89] "kindnet-g7jnc" [4f869226-920a-4722-aa82-308466e32e59] Running
	I1006 19:55:31.661683  202624 system_pods.go:89] "kube-apiserver-embed-certs-830393" [7c31daf2-0bbe-4d0d-a233-79da10ee31b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:55:31.661689  202624 system_pods.go:89] "kube-controller-manager-embed-certs-830393" [5cdae168-9643-4ada-ac16-35f8f337d1fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:55:31.661694  202624 system_pods.go:89] "kube-proxy-xl5tt" [75361417-428d-4ea5-89ad-4570024b8916] Running
	I1006 19:55:31.661701  202624 system_pods.go:89] "kube-scheduler-embed-certs-830393" [be02b2b3-c2c3-458c-8ec2-3d6e2b334427] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:55:31.661709  202624 system_pods.go:89] "storage-provisioner" [c6173113-55a1-44a2-b622-34ba6868ea4c] Running
	I1006 19:55:31.661717  202624 system_pods.go:126] duration metric: took 3.328077ms to wait for k8s-apps to be running ...
	I1006 19:55:31.661731  202624 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 19:55:31.661793  202624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:55:31.675095  202624 system_svc.go:56] duration metric: took 13.356417ms WaitForService to wait for kubelet
	I1006 19:55:31.675124  202624 kubeadm.go:586] duration metric: took 10.181489333s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:55:31.675143  202624 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:55:31.678230  202624 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:55:31.678263  202624 node_conditions.go:123] node cpu capacity is 2
	I1006 19:55:31.678276  202624 node_conditions.go:105] duration metric: took 3.108471ms to run NodePressure ...
	I1006 19:55:31.678288  202624 start.go:241] waiting for startup goroutines ...
	I1006 19:55:31.678295  202624 start.go:246] waiting for cluster config update ...
	I1006 19:55:31.678307  202624 start.go:255] writing updated cluster config ...
	I1006 19:55:31.678603  202624 ssh_runner.go:195] Run: rm -f paused
	I1006 19:55:31.682169  202624 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:55:31.685921  202624 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8k4cq" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:55:30.173219  205530 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-997276:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.822539425s)
	I1006 19:55:30.173247  205530 kic.go:203] duration metric: took 4.822666796s to extract preloaded images to volume ...
	W1006 19:55:30.173400  205530 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 19:55:30.173505  205530 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 19:55:30.308102  205530 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-997276 --name default-k8s-diff-port-997276 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-997276 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-997276 --network default-k8s-diff-port-997276 --ip 192.168.76.2 --volume default-k8s-diff-port-997276:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 19:55:30.701433  205530 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Running}}
	I1006 19:55:30.729527  205530 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:55:30.761882  205530 cli_runner.go:164] Run: docker exec default-k8s-diff-port-997276 stat /var/lib/dpkg/alternatives/iptables
	I1006 19:55:30.828255  205530 oci.go:144] the created container "default-k8s-diff-port-997276" has a running status.
	I1006 19:55:30.828290  205530 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa...
	I1006 19:55:31.050700  205530 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 19:55:31.075268  205530 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:55:31.098858  205530 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 19:55:31.098885  205530 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-997276 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 19:55:31.169635  205530 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:55:31.195559  205530 machine.go:93] provisionDockerMachine start ...
	I1006 19:55:31.195661  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:31.221188  205530 main.go:141] libmachine: Using SSH client type: native
	I1006 19:55:31.221519  205530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1006 19:55:31.221528  205530 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:55:31.222122  205530 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1006 19:55:33.695245  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	W1006 19:55:36.191457  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	I1006 19:55:34.367556  205530 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-997276
	
	I1006 19:55:34.367590  205530 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-997276"
	I1006 19:55:34.367656  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:34.390836  205530 main.go:141] libmachine: Using SSH client type: native
	I1006 19:55:34.391147  205530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1006 19:55:34.391169  205530 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-997276 && echo "default-k8s-diff-port-997276" | sudo tee /etc/hostname
	I1006 19:55:34.549886  205530 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-997276
	
	I1006 19:55:34.550002  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:34.573291  205530 main.go:141] libmachine: Using SSH client type: native
	I1006 19:55:34.573617  205530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1006 19:55:34.573640  205530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-997276' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-997276/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-997276' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:55:34.724422  205530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:55:34.724500  205530 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:55:34.724535  205530 ubuntu.go:190] setting up certificates
	I1006 19:55:34.724571  205530 provision.go:84] configureAuth start
	I1006 19:55:34.724673  205530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997276
	I1006 19:55:34.748293  205530 provision.go:143] copyHostCerts
	I1006 19:55:34.748353  205530 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:55:34.748361  205530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:55:34.748431  205530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:55:34.748518  205530 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:55:34.748524  205530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:55:34.748549  205530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:55:34.748596  205530 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:55:34.748605  205530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:55:34.748628  205530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:55:34.748686  205530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-997276 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-997276 localhost minikube]
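The server certificate generated here carries both hostname and IP SANs (127.0.0.1, 192.168.76.2, the profile name, localhost, minikube) and is signed with the ca.pem/ca-key.pem pair under .minikube/certs. A minimal crypto/x509 sketch of a certificate with that shape, assuming a freshly generated throwaway CA rather than the existing CA files the log actually reuses; subject names and validity periods are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative self-signed CA; the real flow loads ca.pem / ca-key.pem
	// from the .minikube/certs directory instead of generating one.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs shown in the log for this profile.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-997276"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-997276", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}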
	I1006 19:55:35.236098  205530 provision.go:177] copyRemoteCerts
	I1006 19:55:35.236217  205530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:55:35.236296  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:35.255957  205530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:55:35.368629  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 19:55:35.404243  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:55:35.429211  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1006 19:55:35.450593  205530 provision.go:87] duration metric: took 725.982028ms to configureAuth
	I1006 19:55:35.450704  205530 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:55:35.450934  205530 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:55:35.451074  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:35.472704  205530 main.go:141] libmachine: Using SSH client type: native
	I1006 19:55:35.473002  205530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1006 19:55:35.473017  205530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:55:35.896747  205530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:55:35.896768  205530 machine.go:96] duration metric: took 4.701180081s to provisionDockerMachine
	I1006 19:55:35.896778  205530 client.go:171] duration metric: took 11.519885708s to LocalClient.Create
	I1006 19:55:35.896799  205530 start.go:167] duration metric: took 11.519952204s to libmachine.API.Create "default-k8s-diff-port-997276"
	I1006 19:55:35.896806  205530 start.go:293] postStartSetup for "default-k8s-diff-port-997276" (driver="docker")
	I1006 19:55:35.896816  205530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:55:35.896896  205530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:55:35.896937  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:35.924989  205530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:55:36.037865  205530 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:55:36.042141  205530 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:55:36.042174  205530 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:55:36.042185  205530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:55:36.042257  205530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:55:36.042356  205530 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:55:36.042465  205530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:55:36.054578  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:55:36.081880  205530 start.go:296] duration metric: took 185.059839ms for postStartSetup
	I1006 19:55:36.082245  205530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997276
	I1006 19:55:36.110424  205530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/config.json ...
	I1006 19:55:36.110718  205530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:55:36.110780  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:36.135102  205530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:55:36.233218  205530 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:55:36.240232  205530 start.go:128] duration metric: took 11.867153691s to createHost
	I1006 19:55:36.240258  205530 start.go:83] releasing machines lock for "default-k8s-diff-port-997276", held for 11.867287314s
	I1006 19:55:36.240358  205530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997276
	I1006 19:55:36.271876  205530 ssh_runner.go:195] Run: cat /version.json
	I1006 19:55:36.271927  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:36.272211  205530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:55:36.272260  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:36.308533  205530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:55:36.311617  205530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:55:36.517106  205530 ssh_runner.go:195] Run: systemctl --version
	I1006 19:55:36.529300  205530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:55:36.584053  205530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:55:36.589347  205530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:55:36.589418  205530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:55:36.630789  205530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 19:55:36.630864  205530 start.go:495] detecting cgroup driver to use...
	I1006 19:55:36.630922  205530 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:55:36.630996  205530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:55:36.653757  205530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:55:36.669282  205530 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:55:36.669412  205530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:55:36.687827  205530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:55:36.709873  205530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:55:36.873054  205530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:55:37.061088  205530 docker.go:234] disabling docker service ...
	I1006 19:55:37.061205  205530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:55:37.085201  205530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:55:37.099510  205530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:55:37.273048  205530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:55:37.439166  205530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:55:37.454398  205530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:55:37.470811  205530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:55:37.470914  205530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:55:37.480892  205530 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:55:37.480996  205530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:55:37.490619  205530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:55:37.500627  205530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:55:37.510371  205530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:55:37.519984  205530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:55:37.530153  205530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:55:37.546529  205530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:55:37.558696  205530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:55:37.568084  205530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:55:37.577070  205530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:55:37.740653  205530 ssh_runner.go:195] Run: sudo systemctl restart crio
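The run of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before the daemon-reload and crio restart. A minimal sketch of the first two rewrites applied to an in-memory copy of such a drop-in; the sample content is illustrative, not the file from this node:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative drop-in content; the real file lives at
	// /etc/crio/crio.conf.d/02-crio.conf on the node.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`
	// Same intent as the sed commands in the log: force the pause image
	// and switch CRI-O to the cgroupfs cgroup manager.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}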
	I1006 19:55:38.139465  205530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:55:38.139601  205530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:55:38.146699  205530 start.go:563] Will wait 60s for crictl version
	I1006 19:55:38.146831  205530 ssh_runner.go:195] Run: which crictl
	I1006 19:55:38.151362  205530 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:55:38.210220  205530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:55:38.210408  205530 ssh_runner.go:195] Run: crio --version
	I1006 19:55:38.249457  205530 ssh_runner.go:195] Run: crio --version
	I1006 19:55:38.292555  205530 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:55:38.295940  205530 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:55:38.318849  205530 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1006 19:55:38.323415  205530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:55:38.337204  205530 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:55:38.337323  205530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:55:38.337383  205530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:55:38.383180  205530 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:55:38.383205  205530 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:55:38.383267  205530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:55:38.412212  205530 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:55:38.412235  205530 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:55:38.412244  205530 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1006 19:55:38.412330  205530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-997276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:55:38.412414  205530 ssh_runner.go:195] Run: crio config
	I1006 19:55:38.488581  205530 cni.go:84] Creating CNI manager for ""
	I1006 19:55:38.488606  205530 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:55:38.488623  205530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:55:38.488648  205530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-997276 NodeName:default-k8s-diff-port-997276 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:55:38.488785  205530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-997276"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:55:38.488861  205530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:55:38.497928  205530 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:55:38.497999  205530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:55:38.506369  205530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1006 19:55:38.521658  205530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:55:38.536419  205530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
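The kubeadm.yaml.new just copied over bundles the four YAML documents rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in a single file. A minimal sketch of walking those documents and printing each kind, assuming gopkg.in/yaml.v3 is available; the local file path is illustrative:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path is illustrative; on the node the config is written to
	// /var/tmp/minikube/kubeadm.yaml.new.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // iterates over the "---" separated documents
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc["apiVersion"], doc["kind"])
	}
}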
	I1006 19:55:38.550863  205530 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:55:38.554837  205530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:55:38.565475  205530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:55:38.737921  205530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:55:38.763116  205530 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276 for IP: 192.168.76.2
	I1006 19:55:38.763183  205530 certs.go:195] generating shared ca certs ...
	I1006 19:55:38.763221  205530 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:38.763408  205530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:55:38.763478  205530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:55:38.763500  205530 certs.go:257] generating profile certs ...
	I1006 19:55:38.763608  205530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.key
	I1006 19:55:38.763646  205530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt with IP's: []
	I1006 19:55:38.856930  205530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt ...
	I1006 19:55:38.860985  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt: {Name:mkf67fc2f64c7a2ccbcdd74677adbd93488d1642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:38.861223  205530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.key ...
	I1006 19:55:38.861297  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.key: {Name:mk8ac8080717b75118d00305ee1ec5b75f55e0eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:38.861460  205530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key.24aba503
	I1006 19:55:38.861510  205530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt.24aba503 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1006 19:55:39.393830  205530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt.24aba503 ...
	I1006 19:55:39.393912  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt.24aba503: {Name:mk87297d9542e70bca499539cae71710f9d860ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:39.395001  205530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key.24aba503 ...
	I1006 19:55:39.395051  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key.24aba503: {Name:mkcb2f53bece7cc459436bfb9db916aaf6edf647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:39.395196  205530 certs.go:382] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt.24aba503 -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt
	I1006 19:55:39.395336  205530 certs.go:386] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key.24aba503 -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key
	I1006 19:55:39.397596  205530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.key
	I1006 19:55:39.397640  205530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.crt with IP's: []
	I1006 19:55:40.119973  205530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.crt ...
	I1006 19:55:40.120047  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.crt: {Name:mk227bc98fe0b9db02e4dc4b2380f575480009a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:40.120267  205530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.key ...
	I1006 19:55:40.120303  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.key: {Name:mk00a4f139a7cedbae2b90cf02b8ac6d766984a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:40.120574  205530 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:55:40.120647  205530 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:55:40.120671  205530 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:55:40.120726  205530 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:55:40.120776  205530 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:55:40.120834  205530 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:55:40.120909  205530 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:55:40.121520  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:55:40.145770  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:55:40.176613  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:55:40.203165  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:55:40.227418  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1006 19:55:40.249997  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 19:55:40.273550  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:55:40.325650  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 19:55:40.375797  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:55:40.398156  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:55:40.429456  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:55:40.452237  205530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:55:40.467345  205530 ssh_runner.go:195] Run: openssl version
	I1006 19:55:40.474454  205530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:55:40.483956  205530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:55:40.488271  205530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:55:40.488412  205530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:55:40.532245  205530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:55:40.541271  205530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:55:40.551493  205530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:55:40.556121  205530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:55:40.556266  205530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:55:40.598715  205530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:55:40.607644  205530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:55:40.617099  205530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:55:40.621486  205530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:55:40.621600  205530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:55:40.663975  205530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
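	The ls/openssl/ln sequence above is minikube installing its CA bundles into /usr/share/ca-certificates and then linking each one into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL-based clients look up trusted CAs. The Go sketch below reproduces that technique for illustration only (it is not minikube's certs.go); it assumes an openssl binary on PATH and write access to the standard certificate directories.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA copies a PEM certificate into /usr/share/ca-certificates and
	// creates the /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL uses
	// for CA lookup (the same shape as the commands in the log above).
	func installCA(pemPath string) error {
		dst := filepath.Join("/usr/share/ca-certificates", filepath.Base(pemPath))
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return err
		}
		if err := os.WriteFile(dst, data, 0o644); err != nil {
			return err
		}
		// "openssl x509 -hash -noout -in <cert>" prints the subject hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // equivalent to "ln -fs": replace any existing link
		return os.Symlink(dst, link)
	}

	func main() {
		if err := installCA("minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}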
	I1006 19:55:40.673760  205530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:55:40.678486  205530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 19:55:40.678611  205530 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:55:40.678760  205530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:55:40.678864  205530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:55:40.709144  205530 cri.go:89] found id: ""
	I1006 19:55:40.709269  205530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:55:40.719615  205530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 19:55:40.728424  205530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:55:40.728549  205530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:55:40.739447  205530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:55:40.739525  205530 kubeadm.go:157] found existing configuration files:
	
	I1006 19:55:40.739607  205530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1006 19:55:40.749173  205530 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:55:40.749289  205530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:55:40.758606  205530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1006 19:55:40.767967  205530 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:55:40.768080  205530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:55:40.776501  205530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1006 19:55:40.785724  205530 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:55:40.785839  205530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:55:40.794336  205530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1006 19:55:40.803201  205530 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:55:40.803310  205530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 19:55:40.811046  205530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:55:40.861518  205530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:55:40.862064  205530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:55:40.904437  205530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:55:40.904787  205530 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:55:40.904855  205530 kubeadm.go:318] OS: Linux
	I1006 19:55:40.904955  205530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:55:40.905042  205530 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:55:40.905142  205530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:55:40.905232  205530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:55:40.905311  205530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:55:40.905382  205530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:55:40.905468  205530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:55:40.905561  205530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:55:40.905640  205530 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:55:41.008488  205530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:55:41.008675  205530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:55:41.008816  205530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:55:41.025705  205530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1006 19:55:38.192472  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	W1006 19:55:40.194306  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	W1006 19:55:42.195773  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	I1006 19:55:41.032323  205530 out.go:252]   - Generating certificates and keys ...
	I1006 19:55:41.032497  205530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:55:41.032603  205530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:55:41.729302  205530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 19:55:41.888092  205530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 19:55:42.212259  205530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 19:55:43.105882  205530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 19:55:43.587669  205530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 19:55:43.588343  205530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-997276 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1006 19:55:44.696986  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	W1006 19:55:47.192410  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	I1006 19:55:44.240678  205530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 19:55:44.241269  205530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-997276 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1006 19:55:45.671407  205530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 19:55:45.777740  205530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 19:55:46.726732  205530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 19:55:46.727296  205530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 19:55:47.299820  205530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:55:47.592234  205530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:55:48.303930  205530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:55:48.704894  205530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:55:49.395205  205530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:55:49.396209  205530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:55:49.399087  205530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1006 19:55:49.192658  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	W1006 19:55:51.193231  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	I1006 19:55:49.402724  205530 out.go:252]   - Booting up control plane ...
	I1006 19:55:49.402848  205530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:55:49.402948  205530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:55:49.403023  205530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:55:49.422181  205530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:55:49.422560  205530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:55:49.430164  205530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:55:49.430477  205530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:55:49.430527  205530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:55:49.576158  205530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:55:49.576287  205530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:55:51.072117  205530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501518624s
	I1006 19:55:51.075863  205530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:55:51.075970  205530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1006 19:55:51.076266  205530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:55:51.076362  205530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 19:55:54.597958  205530 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.521523913s
	I1006 19:55:57.470529  205530 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.394653523s
	I1006 19:55:57.578227  205530 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502124141s
	I1006 19:55:57.601203  205530 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 19:55:57.613770  205530 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 19:55:57.632952  205530 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 19:55:57.633175  205530 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-997276 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 19:55:57.644776  205530 kubeadm.go:318] [bootstrap-token] Using token: k5hkyd.rr7hzkr13nxttfpd
	W1006 19:55:53.694289  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	W1006 19:55:56.192861  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	I1006 19:55:57.647790  205530 out.go:252]   - Configuring RBAC rules ...
	I1006 19:55:57.647939  205530 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 19:55:57.657314  205530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 19:55:57.666020  205530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 19:55:57.670392  205530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 19:55:57.674680  205530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 19:55:57.678892  205530 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 19:55:57.985681  205530 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 19:55:58.455342  205530 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1006 19:55:58.988302  205530 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1006 19:55:58.989748  205530 kubeadm.go:318] 
	I1006 19:55:58.989825  205530 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1006 19:55:58.989831  205530 kubeadm.go:318] 
	I1006 19:55:58.989915  205530 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1006 19:55:58.989920  205530 kubeadm.go:318] 
	I1006 19:55:58.989947  205530 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1006 19:55:58.990377  205530 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 19:55:58.990436  205530 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 19:55:58.990441  205530 kubeadm.go:318] 
	I1006 19:55:58.990497  205530 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1006 19:55:58.990502  205530 kubeadm.go:318] 
	I1006 19:55:58.990552  205530 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 19:55:58.990557  205530 kubeadm.go:318] 
	I1006 19:55:58.990611  205530 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1006 19:55:58.990689  205530 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 19:55:58.990760  205530 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 19:55:58.990764  205530 kubeadm.go:318] 
	I1006 19:55:58.991077  205530 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 19:55:58.991165  205530 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1006 19:55:58.991171  205530 kubeadm.go:318] 
	I1006 19:55:58.991461  205530 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token k5hkyd.rr7hzkr13nxttfpd \
	I1006 19:55:58.991574  205530 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd \
	I1006 19:55:58.991813  205530 kubeadm.go:318] 	--control-plane 
	I1006 19:55:58.991832  205530 kubeadm.go:318] 
	I1006 19:55:58.992161  205530 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1006 19:55:58.992172  205530 kubeadm.go:318] 
	I1006 19:55:58.992524  205530 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token k5hkyd.rr7hzkr13nxttfpd \
	I1006 19:55:58.992645  205530 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd 
	I1006 19:55:58.997155  205530 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 19:55:58.997429  205530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:55:58.997559  205530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 19:55:58.997587  205530 cni.go:84] Creating CNI manager for ""
	I1006 19:55:58.997595  205530 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:55:59.000638  205530 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1006 19:55:59.003675  205530 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 19:55:59.008580  205530 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1006 19:55:59.008602  205530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	W1006 19:55:58.691238  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	W1006 19:56:00.700334  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	I1006 19:55:59.024844  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 19:55:59.321161  205530 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 19:55:59.321393  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:55:59.321552  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-997276 minikube.k8s.io/updated_at=2025_10_06T19_55_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81 minikube.k8s.io/name=default-k8s-diff-port-997276 minikube.k8s.io/primary=true
	I1006 19:55:59.519500  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:55:59.519565  205530 ops.go:34] apiserver oom_adj: -16
	I1006 19:56:00.023896  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:00.519646  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:01.020194  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:01.520326  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:02.022595  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:02.519833  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:03.026546  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:03.520113  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:04.020219  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:04.520116  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:04.709526  205530 kubeadm.go:1113] duration metric: took 5.388209939s to wait for elevateKubeSystemPrivileges
	I1006 19:56:04.709560  205530 kubeadm.go:402] duration metric: took 24.030955182s to StartCluster
	I1006 19:56:04.709580  205530 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:04.709666  205530 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:56:04.711677  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:04.712256  205530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 19:56:04.712322  205530 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:56:04.712535  205530 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:56:04.712610  205530 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-997276"
	I1006 19:56:04.712623  205530 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-997276"
	I1006 19:56:04.712647  205530 host.go:66] Checking if "default-k8s-diff-port-997276" exists ...
	I1006 19:56:04.712526  205530 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:56:04.713027  205530 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-997276"
	I1006 19:56:04.713053  205530 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-997276"
	I1006 19:56:04.713096  205530 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:56:04.713439  205530 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:56:04.719760  205530 out.go:179] * Verifying Kubernetes components...
	I1006 19:56:04.723320  205530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:56:04.762024  205530 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-997276"
	I1006 19:56:04.762196  205530 host.go:66] Checking if "default-k8s-diff-port-997276" exists ...
	I1006 19:56:04.762931  205530 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:56:04.788754  205530 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1006 19:56:03.192206  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	I1006 19:56:04.191511  202624 pod_ready.go:94] pod "coredns-66bc5c9577-8k4cq" is "Ready"
	I1006 19:56:04.191536  202624 pod_ready.go:86] duration metric: took 32.50558766s for pod "coredns-66bc5c9577-8k4cq" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.194202  202624 pod_ready.go:83] waiting for pod "etcd-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.199267  202624 pod_ready.go:94] pod "etcd-embed-certs-830393" is "Ready"
	I1006 19:56:04.199294  202624 pod_ready.go:86] duration metric: took 5.063985ms for pod "etcd-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.201916  202624 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.207381  202624 pod_ready.go:94] pod "kube-apiserver-embed-certs-830393" is "Ready"
	I1006 19:56:04.207426  202624 pod_ready.go:86] duration metric: took 5.481437ms for pod "kube-apiserver-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.210389  202624 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.391748  202624 pod_ready.go:94] pod "kube-controller-manager-embed-certs-830393" is "Ready"
	I1006 19:56:04.391780  202624 pod_ready.go:86] duration metric: took 181.362549ms for pod "kube-controller-manager-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.590344  202624 pod_ready.go:83] waiting for pod "kube-proxy-xl5tt" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.989638  202624 pod_ready.go:94] pod "kube-proxy-xl5tt" is "Ready"
	I1006 19:56:04.989672  202624 pod_ready.go:86] duration metric: took 399.255074ms for pod "kube-proxy-xl5tt" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:05.190176  202624 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:05.589734  202624 pod_ready.go:94] pod "kube-scheduler-embed-certs-830393" is "Ready"
	I1006 19:56:05.589769  202624 pod_ready.go:86] duration metric: took 399.564079ms for pod "kube-scheduler-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:05.589782  202624 pod_ready.go:40] duration metric: took 33.907581556s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:56:05.691865  202624 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 19:56:05.695093  202624 out.go:179] * Done! kubectl is now configured to use "embed-certs-830393" cluster and "default" namespace by default
	I1006 19:56:04.800205  205530 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:56:04.800230  205530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 19:56:04.800313  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:56:04.808260  205530 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 19:56:04.808279  205530 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 19:56:04.808344  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:56:04.839523  205530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:56:04.861412  205530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:56:05.021506  205530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 19:56:05.137871  205530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:56:05.206197  205530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:56:05.206436  205530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1006 19:56:06.077784  205530 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-997276" to be "Ready" ...
	I1006 19:56:06.078020  205530 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1006 19:56:06.082304  205530 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1006 19:56:06.085443  205530 addons.go:514] duration metric: took 1.372885884s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1006 19:56:06.581686  205530 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-997276" context rescaled to 1 replicas
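	The ConfigMap rewrite at 19:56:05 above splices a hosts block into the CoreDNS Corefile just before its forward plugin, so host.minikube.internal resolves to the host gateway (192.168.76.1) from inside the cluster. A minimal Go sketch of that text transformation (an illustration of what the logged sed pipeline does, not minikube's code; the sample Corefile is assumed):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a CoreDNS "hosts" block immediately before the
	// "forward" plugin line, so queries for the injected name are answered
	// locally and never forwarded upstream.
	func injectHostRecord(corefile, ip, hostname string) string {
		hostsBlock := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, hostname)
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
				b.WriteString(hostsBlock)
			}
			b.WriteString(line)
		}
		return b.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.76.1", "host.minikube.internal"))
	}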
	W1006 19:56:08.080827  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:10.581013  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:13.081695  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:15.082166  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:17.581562  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
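	The node_ready.go lines above poll the node object until its Ready condition turns True, retrying for up to 6m0s. A small client-go sketch of the same check (illustrative only; the kubeconfig path and node name are taken from this run, and the polling loop is an assumption rather than minikube's exact code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is currently True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			ok, err := nodeReady(cs, "default-k8s-diff-port-997276")
			if err == nil && ok {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2 * time.Second) // retry, as the "will retry" lines above do
		}
		fmt.Println("timed out waiting for node to become Ready")
	}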
	
	
	==> CRI-O <==
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.154459433Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=204003fa-9342-4e83-83c5-1ca9caa3149c name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.155796647Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5c16859e-8160-4bed-b424-1cc04e673cf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.156187621Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.256834272Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.257100484Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/319f9ba7e737005b7bb19c1f326585a9647858af4060a0961b26ef78185c44f2/merged/etc/passwd: no such file or directory"
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.25713827Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/319f9ba7e737005b7bb19c1f326585a9647858af4060a0961b26ef78185c44f2/merged/etc/group: no such file or directory"
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.257530982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.376288273Z" level=info msg="Created container f4d2a9fccfdd6f393221414a6c0aa8a77d35201a86d3a04c6be9f91649344122: kube-system/storage-provisioner/storage-provisioner" id=5c16859e-8160-4bed-b424-1cc04e673cf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.378126115Z" level=info msg="Starting container: f4d2a9fccfdd6f393221414a6c0aa8a77d35201a86d3a04c6be9f91649344122" id=c8b89383-96da-4aca-aecd-9bc70da6e760 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.38101411Z" level=info msg="Started container" PID=1644 containerID=f4d2a9fccfdd6f393221414a6c0aa8a77d35201a86d3a04c6be9f91649344122 description=kube-system/storage-provisioner/storage-provisioner id=c8b89383-96da-4aca-aecd-9bc70da6e760 name=/runtime.v1.RuntimeService/StartContainer sandboxID=138c5c6a9e75f2c957c36d7e7c145c6349f4250be57261a3833a12ef1de97899
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.704971719Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.71582191Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.715855872Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.715883138Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.71937715Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.719530343Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.719607637Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.723149281Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.723187493Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.723212379Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.728037288Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.728073432Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.728097079Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.731445259Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.7314795Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f4d2a9fccfdd6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   138c5c6a9e75f       storage-provisioner                          kube-system
	402ce39bb0201       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   5eb0b48f2176a       dashboard-metrics-scraper-6ffb444bf9-rhnrq   kubernetes-dashboard
	02e7281204657       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago      Running             kubernetes-dashboard        0                   0b55399d2fc79       kubernetes-dashboard-855c9754f9-dg6tb        kubernetes-dashboard
	f683a297d5110       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   138c5c6a9e75f       storage-provisioner                          kube-system
	e94dede38eab4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   7f426bcae331d       kube-proxy-xl5tt                             kube-system
	53fa3c45d9d3f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   d8c1f7f60dd66       coredns-66bc5c9577-8k4cq                     kube-system
	2bfab853c45dc       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   23688320bc7e9       busybox                                      default
	e2adf0d6acf04       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   5d5fcae9b370f       kindnet-g7jnc                                kube-system
	feaea5c591582       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   8a9a8c2203f5c       kube-controller-manager-embed-certs-830393   kube-system
	5aac2d3517c5e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   74415b04ebeda       kube-scheduler-embed-certs-830393            kube-system
	5d7f3ed046188       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   f8d0fb46bb3c6       etcd-embed-certs-830393                      kube-system
	1ca5cdfb3593e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   e0643df874155       kube-apiserver-embed-certs-830393            kube-system
	
	
	==> coredns [53fa3c45d9d3fe4b234e83920d4c40b4da0c696f8345d0661f1282310ea0a883] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43496 - 26265 "HINFO IN 6717465657593695425.5428441928682522278. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00472253s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-830393
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-830393
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=embed-certs-830393
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_53_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:53:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-830393
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:56:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:55:59 +0000   Mon, 06 Oct 2025 19:53:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:55:59 +0000   Mon, 06 Oct 2025 19:53:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:55:59 +0000   Mon, 06 Oct 2025 19:53:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:55:59 +0000   Mon, 06 Oct 2025 19:54:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-830393
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bfe5a7d390a49d3be8226fc84b92394
	  System UUID:                f887c677-54f6-492d-93f4-e65ae4538988
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-8k4cq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-embed-certs-830393                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-g7jnc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-embed-certs-830393             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-embed-certs-830393    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-xl5tt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-embed-certs-830393             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rhnrq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dg6tb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m31s (x8 over 2m31s)  kubelet          Node embed-certs-830393 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m31s (x8 over 2m31s)  kubelet          Node embed-certs-830393 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s (x8 over 2m31s)  kubelet          Node embed-certs-830393 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m22s                  kubelet          Node embed-certs-830393 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s                  kubelet          Node embed-certs-830393 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m22s                  kubelet          Node embed-certs-830393 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m18s                  node-controller  Node embed-certs-830393 event: Registered Node embed-certs-830393 in Controller
	  Normal   NodeReady                96s                    kubelet          Node embed-certs-830393 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 61s)      kubelet          Node embed-certs-830393 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 61s)      kubelet          Node embed-certs-830393 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 61s)      kubelet          Node embed-certs-830393 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node embed-certs-830393 event: Registered Node embed-certs-830393 in Controller
	
	
	==> dmesg <==
	[Oct 6 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:52] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:55] overlayfs: idmapped layers are currently not supported
	[ +29.761517] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5d7f3ed0461886cd83205860bc71296168bd61a981970cd2dc78f60ba117030e] <==
	{"level":"warn","ts":"2025-10-06T19:55:25.969615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.013977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.071925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.092181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.142276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.191120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.244117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.294297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.343913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.377818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.423966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.494194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.543504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.560710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.578028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.618752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.737172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54246","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T19:55:29.171588Z","caller":"traceutil/trace.go:172","msg":"trace[129443978] transaction","detail":"{read_only:false; number_of_response:0; response_revision:503; }","duration":"106.839389ms","start":"2025-10-06T19:55:29.064732Z","end":"2025-10-06T19:55:29.171571Z","steps":["trace[129443978] 'process raft request'  (duration: 62.695423ms)","trace[129443978] 'compare'  (duration: 44.018359ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-06T19:55:29.332885Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.98837ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" limit:1 ","response":"range_response_count:1 size:840"}
	{"level":"info","ts":"2025-10-06T19:55:29.332952Z","caller":"traceutil/trace.go:172","msg":"trace[948330574] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:1; response_revision:507; }","duration":"111.077151ms","start":"2025-10-06T19:55:29.221862Z","end":"2025-10-06T19:55:29.332939Z","steps":["trace[948330574] 'agreement among raft nodes before linearized reading'  (duration: 57.860562ms)","trace[948330574] 'range keys from in-memory index tree'  (duration: 53.056061ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-06T19:55:29.333194Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.21871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/embed-certs-830393.186bff0f5ddac015\" limit:1 ","response":"range_response_count:1 size:723"}
	{"level":"info","ts":"2025-10-06T19:55:29.333261Z","caller":"traceutil/trace.go:172","msg":"trace[2020034856] transaction","detail":"{read_only:false; response_revision:508; number_of_response:1; }","duration":"124.01386ms","start":"2025-10-06T19:55:29.209237Z","end":"2025-10-06T19:55:29.333251Z","steps":["trace[2020034856] 'process raft request'  (duration: 70.538764ms)","trace[2020034856] 'compare'  (duration: 52.940851ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-06T19:55:29.333586Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.366399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2025-10-06T19:55:29.333623Z","caller":"traceutil/trace.go:172","msg":"trace[1163279525] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:508; }","duration":"103.405538ms","start":"2025-10-06T19:55:29.230209Z","end":"2025-10-06T19:55:29.333614Z","steps":["trace[1163279525] 'agreement among raft nodes before linearized reading'  (duration: 103.31825ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-06T19:55:29.333280Z","caller":"traceutil/trace.go:172","msg":"trace[2040801756] range","detail":"{range_begin:/registry/events/default/embed-certs-830393.186bff0f5ddac015; range_end:; response_count:1; response_revision:508; }","duration":"107.312956ms","start":"2025-10-06T19:55:29.225956Z","end":"2025-10-06T19:55:29.333269Z","steps":["trace[2040801756] 'agreement among raft nodes before linearized reading'  (duration: 107.148555ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:56:21 up  1:38,  0 user,  load average: 4.10, 2.82, 2.10
	Linux embed-certs-830393 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e2adf0d6acf04da953a1454524da4257845c1f019a6c67dbf5feef047497298e] <==
	I1006 19:55:29.485884       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:55:29.493176       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1006 19:55:29.493337       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:55:29.493350       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:55:29.493361       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:55:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:55:29.746873       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:55:29.746912       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:55:29.746980       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:55:29.747112       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1006 19:55:59.703158       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1006 19:55:59.708772       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1006 19:55:59.708779       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1006 19:55:59.708959       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1006 19:56:01.047280       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:56:01.047394       1 metrics.go:72] Registering metrics
	I1006 19:56:01.047501       1 controller.go:711] "Syncing nftables rules"
	I1006 19:56:09.704654       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1006 19:56:09.704741       1 main.go:301] handling current node
	I1006 19:56:19.707773       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1006 19:56:19.707808       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1ca5cdfb3593e4dad733094500a9fe2b21c50f4f50a60370c3764e89baad98ac] <==
	I1006 19:55:28.544912       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 19:55:28.556482       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1006 19:55:28.559360       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:55:28.627566       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1006 19:55:28.627609       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1006 19:55:28.627662       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1006 19:55:28.637304       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:55:28.647993       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1006 19:55:28.648741       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1006 19:55:28.650326       1 aggregator.go:171] initial CRD sync complete...
	I1006 19:55:28.650341       1 autoregister_controller.go:144] Starting autoregister controller
	I1006 19:55:28.650348       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 19:55:28.650354       1 cache.go:39] Caches are synced for autoregister controller
	I1006 19:55:28.660863       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1006 19:55:28.834387       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 19:55:28.857603       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:55:29.937253       1 controller.go:667] quota admission added evaluator for: namespaces
	I1006 19:55:30.326863       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 19:55:30.478449       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:55:30.536575       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:55:30.774019       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.102.84"}
	I1006 19:55:30.907406       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.211.64"}
	I1006 19:55:33.124886       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 19:55:33.377264       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 19:55:33.474237       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [feaea5c59158221450ddfd0bf1e25c002e959fd1e69f33f065cfebc4fa3ff1dc] <==
	I1006 19:55:32.929202       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1006 19:55:32.931487       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1006 19:55:32.941159       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1006 19:55:32.941276       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 19:55:32.947410       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1006 19:55:32.950666       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1006 19:55:32.953551       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1006 19:55:32.957509       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1006 19:55:32.959689       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1006 19:55:32.966448       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1006 19:55:32.967883       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1006 19:55:32.968055       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1006 19:55:32.968105       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1006 19:55:32.968522       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1006 19:55:32.968596       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1006 19:55:32.968785       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1006 19:55:32.968994       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-830393"
	I1006 19:55:32.969231       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1006 19:55:32.968838       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1006 19:55:32.971428       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:55:32.985336       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1006 19:55:32.987669       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1006 19:55:32.999268       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:55:32.999370       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:55:32.999402       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [e94dede38eab415b5d932275312bb9c5abecf5f78916a03894c7a79d42672cd6] <==
	I1006 19:55:29.806125       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:55:30.070426       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:55:30.181412       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:55:30.191095       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1006 19:55:30.191314       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:55:30.514631       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:55:30.514702       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:55:30.589618       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:55:30.590087       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:55:30.590336       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:55:30.591960       1 config.go:200] "Starting service config controller"
	I1006 19:55:30.597494       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:55:30.597614       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:55:30.597621       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:55:30.597671       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:55:30.597687       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:55:30.622617       1 config.go:309] "Starting node config controller"
	I1006 19:55:30.623540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:55:30.623632       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:55:30.697772       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 19:55:30.697805       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 19:55:30.698548       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5aac2d3517c5e661bd5a3fa05304330ba6bdf4006024e9d2204b7537e85be506] <==
	I1006 19:55:25.959557       1 serving.go:386] Generated self-signed cert in-memory
	I1006 19:55:30.044273       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 19:55:30.044402       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:55:30.117338       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 19:55:30.126581       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 19:55:30.126639       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1006 19:55:30.126850       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1006 19:55:30.126660       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:55:30.129127       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:55:30.126672       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:55:30.129192       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:55:30.226887       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1006 19:55:30.229444       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:55:30.229531       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 19:55:33 embed-certs-830393 kubelet[777]: I1006 19:55:33.629298     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lggcq\" (UniqueName: \"kubernetes.io/projected/8d3356fd-da8c-4b19-9b5c-acf2329fb3d9-kube-api-access-lggcq\") pod \"kubernetes-dashboard-855c9754f9-dg6tb\" (UID: \"8d3356fd-da8c-4b19-9b5c-acf2329fb3d9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dg6tb"
	Oct 06 19:55:33 embed-certs-830393 kubelet[777]: I1006 19:55:33.629397     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d6451a59-b052-443e-ae51-67be77436167-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-rhnrq\" (UID: \"d6451a59-b052-443e-ae51-67be77436167\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhnrq"
	Oct 06 19:55:33 embed-certs-830393 kubelet[777]: I1006 19:55:33.629474     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8d3356fd-da8c-4b19-9b5c-acf2329fb3d9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-dg6tb\" (UID: \"8d3356fd-da8c-4b19-9b5c-acf2329fb3d9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dg6tb"
	Oct 06 19:55:33 embed-certs-830393 kubelet[777]: I1006 19:55:33.683361     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 06 19:55:33 embed-certs-830393 kubelet[777]: W1006 19:55:33.920022     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/crio-5eb0b48f2176a14c2af0828957d1dfdd28a51b00d8f1ce18f2fa4b33b84015b8 WatchSource:0}: Error finding container 5eb0b48f2176a14c2af0828957d1dfdd28a51b00d8f1ce18f2fa4b33b84015b8: Status 404 returned error can't find the container with id 5eb0b48f2176a14c2af0828957d1dfdd28a51b00d8f1ce18f2fa4b33b84015b8
	Oct 06 19:55:33 embed-certs-830393 kubelet[777]: W1006 19:55:33.940588     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/crio-0b55399d2fc7987a92e9f4012e8405bc2c4a6070c29eaaf8c5227aab29beac1b WatchSource:0}: Error finding container 0b55399d2fc7987a92e9f4012e8405bc2c4a6070c29eaaf8c5227aab29beac1b: Status 404 returned error can't find the container with id 0b55399d2fc7987a92e9f4012e8405bc2c4a6070c29eaaf8c5227aab29beac1b
	Oct 06 19:55:40 embed-certs-830393 kubelet[777]: I1006 19:55:40.011150     777 scope.go:117] "RemoveContainer" containerID="479fc7e6248e67e1b0cadaa08f93446279e02a7ffe2ac958e06c00bfa5b00883"
	Oct 06 19:55:41 embed-certs-830393 kubelet[777]: I1006 19:55:41.007735     777 scope.go:117] "RemoveContainer" containerID="479fc7e6248e67e1b0cadaa08f93446279e02a7ffe2ac958e06c00bfa5b00883"
	Oct 06 19:55:41 embed-certs-830393 kubelet[777]: I1006 19:55:41.008055     777 scope.go:117] "RemoveContainer" containerID="f6489245180f1ae178f1034abef0b8e9d237b61c350f58f1beb9e66e1a7c9fe3"
	Oct 06 19:55:41 embed-certs-830393 kubelet[777]: E1006 19:55:41.008207     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhnrq_kubernetes-dashboard(d6451a59-b052-443e-ae51-67be77436167)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhnrq" podUID="d6451a59-b052-443e-ae51-67be77436167"
	Oct 06 19:55:42 embed-certs-830393 kubelet[777]: I1006 19:55:42.013154     777 scope.go:117] "RemoveContainer" containerID="f6489245180f1ae178f1034abef0b8e9d237b61c350f58f1beb9e66e1a7c9fe3"
	Oct 06 19:55:42 embed-certs-830393 kubelet[777]: E1006 19:55:42.013341     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhnrq_kubernetes-dashboard(d6451a59-b052-443e-ae51-67be77436167)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhnrq" podUID="d6451a59-b052-443e-ae51-67be77436167"
	Oct 06 19:55:43 embed-certs-830393 kubelet[777]: I1006 19:55:43.891299     777 scope.go:117] "RemoveContainer" containerID="f6489245180f1ae178f1034abef0b8e9d237b61c350f58f1beb9e66e1a7c9fe3"
	Oct 06 19:55:43 embed-certs-830393 kubelet[777]: E1006 19:55:43.896210     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhnrq_kubernetes-dashboard(d6451a59-b052-443e-ae51-67be77436167)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhnrq" podUID="d6451a59-b052-443e-ae51-67be77436167"
	Oct 06 19:55:54 embed-certs-830393 kubelet[777]: I1006 19:55:54.750020     777 scope.go:117] "RemoveContainer" containerID="f6489245180f1ae178f1034abef0b8e9d237b61c350f58f1beb9e66e1a7c9fe3"
	Oct 06 19:55:55 embed-certs-830393 kubelet[777]: I1006 19:55:55.055980     777 scope.go:117] "RemoveContainer" containerID="f6489245180f1ae178f1034abef0b8e9d237b61c350f58f1beb9e66e1a7c9fe3"
	Oct 06 19:55:55 embed-certs-830393 kubelet[777]: I1006 19:55:55.056417     777 scope.go:117] "RemoveContainer" containerID="402ce39bb02015c167602ffe294d21c2474937b76810fa99052cb19af2dbce0b"
	Oct 06 19:55:55 embed-certs-830393 kubelet[777]: E1006 19:55:55.056597     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhnrq_kubernetes-dashboard(d6451a59-b052-443e-ae51-67be77436167)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhnrq" podUID="d6451a59-b052-443e-ae51-67be77436167"
	Oct 06 19:55:55 embed-certs-830393 kubelet[777]: I1006 19:55:55.086122     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dg6tb" podStartSLOduration=9.889446733 podStartE2EDuration="22.086103232s" podCreationTimestamp="2025-10-06 19:55:33 +0000 UTC" firstStartedPulling="2025-10-06 19:55:33.943273182 +0000 UTC m=+13.467695932" lastFinishedPulling="2025-10-06 19:55:46.139929681 +0000 UTC m=+25.664352431" observedRunningTime="2025-10-06 19:55:47.045972572 +0000 UTC m=+26.570395339" watchObservedRunningTime="2025-10-06 19:55:55.086103232 +0000 UTC m=+34.610525990"
	Oct 06 19:56:00 embed-certs-830393 kubelet[777]: I1006 19:56:00.129668     777 scope.go:117] "RemoveContainer" containerID="f683a297d5110be5e2db1398d5d9f375e296e6251a59736d916eadc918fcd276"
	Oct 06 19:56:03 embed-certs-830393 kubelet[777]: I1006 19:56:03.891993     777 scope.go:117] "RemoveContainer" containerID="402ce39bb02015c167602ffe294d21c2474937b76810fa99052cb19af2dbce0b"
	Oct 06 19:56:03 embed-certs-830393 kubelet[777]: E1006 19:56:03.892647     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhnrq_kubernetes-dashboard(d6451a59-b052-443e-ae51-67be77436167)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhnrq" podUID="d6451a59-b052-443e-ae51-67be77436167"
	Oct 06 19:56:18 embed-certs-830393 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 06 19:56:18 embed-certs-830393 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 06 19:56:18 embed-certs-830393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [02e72812046572c20424f4bde19f4301ef6c72f7c0f044b1a3c716bcfe7a2943] <==
	2025/10/06 19:55:46 Using namespace: kubernetes-dashboard
	2025/10/06 19:55:46 Using in-cluster config to connect to apiserver
	2025/10/06 19:55:46 Using secret token for csrf signing
	2025/10/06 19:55:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/06 19:55:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/06 19:55:46 Successful initial request to the apiserver, version: v1.34.1
	2025/10/06 19:55:46 Generating JWE encryption key
	2025/10/06 19:55:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/06 19:55:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/06 19:55:46 Initializing JWE encryption key from synchronized object
	2025/10/06 19:55:46 Creating in-cluster Sidecar client
	2025/10/06 19:55:46 Serving insecurely on HTTP port: 9090
	2025/10/06 19:55:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/06 19:56:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/06 19:55:46 Starting overwatch
	
	
	==> storage-provisioner [f4d2a9fccfdd6f393221414a6c0aa8a77d35201a86d3a04c6be9f91649344122] <==
	I1006 19:56:00.425812       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 19:56:00.501718       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 19:56:00.501775       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1006 19:56:00.505696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:03.971866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:08.232526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:11.830908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:14.883882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:17.906235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:17.912121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:56:17.912272       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 19:56:17.912343       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a38e7b44-d976-40ca-8b2f-247b56a0f9cb", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-830393_b74e118c-ce89-4499-84fa-817dad0addca became leader
	I1006 19:56:17.912434       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-830393_b74e118c-ce89-4499-84fa-817dad0addca!
	W1006 19:56:17.915946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:17.928149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:56:18.016135       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-830393_b74e118c-ce89-4499-84fa-817dad0addca!
	W1006 19:56:19.931055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:19.935832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f683a297d5110be5e2db1398d5d9f375e296e6251a59736d916eadc918fcd276] <==
	I1006 19:55:29.885542       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1006 19:55:59.900974       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-830393 -n embed-certs-830393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-830393 -n embed-certs-830393: exit status 2 (376.844259ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-830393 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-830393
helpers_test.go:243: (dbg) docker inspect embed-certs-830393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a",
	        "Created": "2025-10-06T19:53:31.962897615Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 202803,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:55:13.076516701Z",
	            "FinishedAt": "2025-10-06T19:55:12.118758111Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a-json.log",
	        "Name": "/embed-certs-830393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-830393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-830393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a",
	                "LowerDir": "/var/lib/docker/overlay2/bd523f7fa06a7e0ad89b9a8ced25801bcb8b1dd5b3445e1afcebc1a259cb4596-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd523f7fa06a7e0ad89b9a8ced25801bcb8b1dd5b3445e1afcebc1a259cb4596/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd523f7fa06a7e0ad89b9a8ced25801bcb8b1dd5b3445e1afcebc1a259cb4596/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd523f7fa06a7e0ad89b9a8ced25801bcb8b1dd5b3445e1afcebc1a259cb4596/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-830393",
	                "Source": "/var/lib/docker/volumes/embed-certs-830393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-830393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-830393",
	                "name.minikube.sigs.k8s.io": "embed-certs-830393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c79102a749b776d621309ccb77b4cde76ba9fb6e7fcd479ce2a2d5384efbc26",
	            "SandboxKey": "/var/run/docker/netns/0c79102a749b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-830393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:2a:d3:82:4e:91",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1800026322b057a83604241d8aa91bc0c8c07713c3ce5f5e76ba25af81a1e332",
	                    "EndpointID": "eeb0e0b03af579e23ecaefd123c161766c30e9c4815a2122317c1401a492670e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-830393",
	                        "db0504489522"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-830393 -n embed-certs-830393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-830393 -n embed-certs-830393: exit status 2 (345.657854ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-830393 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-830393 logs -n 25: (1.266528178s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p old-k8s-version-100545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:51 UTC │
	│ start   │ -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p cert-expiration-585086 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-585086       │ jenkins │ v1.37.0 │ 06 Oct 25 19:51 UTC │ 06 Oct 25 19:53 UTC │
	│ image   │ old-k8s-version-100545 image list --format=json                                                                                                                                                                                               │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ pause   │ -p old-k8s-version-100545 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │                     │
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:53 UTC │
	│ delete  │ -p cert-expiration-585086                                                                                                                                                                                                                     │ cert-expiration-585086       │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:53 UTC │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-314275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │                     │
	│ stop    │ -p no-preload-314275 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:54 UTC │
	│ addons  │ enable dashboard -p no-preload-314275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:54 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-830393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │                     │
	│ stop    │ -p embed-certs-830393 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-830393 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ pause   │ -p no-preload-314275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │                     │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:56 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p disable-driver-mounts-932453                                                                                                                                                                                                               │ disable-driver-mounts-932453 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ start   │ -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │                     │
	│ image   │ embed-certs-830393 image list --format=json                                                                                                                                                                                                   │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ pause   │ -p embed-certs-830393 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:55:24
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:55:24.010489  205530 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:55:24.010743  205530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:55:24.010778  205530 out.go:374] Setting ErrFile to fd 2...
	I1006 19:55:24.010800  205530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:55:24.011093  205530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:55:24.011562  205530 out.go:368] Setting JSON to false
	I1006 19:55:24.012571  205530 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5859,"bootTime":1759774665,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:55:24.012673  205530 start.go:140] virtualization:  
	I1006 19:55:24.016561  205530 out.go:179] * [default-k8s-diff-port-997276] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:55:24.020767  205530 notify.go:220] Checking for updates...
	I1006 19:55:24.024229  205530 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:55:24.027381  205530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:55:24.030362  205530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:55:24.033387  205530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:55:24.036239  205530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:55:24.039074  205530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:55:24.042544  205530 config.go:182] Loaded profile config "embed-certs-830393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:55:24.042713  205530 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:55:24.110328  205530 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:55:24.110461  205530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:55:24.213486  205530 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-06 19:55:24.2001581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:
/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:55:24.213597  205530 docker.go:318] overlay module found
	I1006 19:55:24.216955  205530 out.go:179] * Using the docker driver based on user configuration
	I1006 19:55:24.219889  205530 start.go:304] selected driver: docker
	I1006 19:55:24.219906  205530 start.go:924] validating driver "docker" against <nil>
	I1006 19:55:24.219919  205530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:55:24.220663  205530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:55:24.321463  205530 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-06 19:55:24.306531279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:55:24.321630  205530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 19:55:24.321869  205530 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:55:24.324909  205530 out.go:179] * Using Docker driver with root privileges
	I1006 19:55:24.327756  205530 cni.go:84] Creating CNI manager for ""
	I1006 19:55:24.327825  205530 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:55:24.327837  205530 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 19:55:24.327907  205530 start.go:348] cluster config:
	{Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
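
For readers decoding the flattened cluster-config dump above: it is Go's %+v rendering of minikube's profile config, which is saved to config.json a few lines below. A minimal sketch of the shape of that structure, with field names copied from the dump itself and types assumed (this is not the full type), looks roughly like:

	// Sketch only: a reduced view of the profile config printed above.
	// Field names come from the dump; types and omissions are assumptions.
	package config

	type KubernetesConfig struct {
		KubernetesVersion string // "v1.34.1"
		ClusterName       string // "default-k8s-diff-port-997276"
		ContainerRuntime  string // "crio"
		NetworkPlugin     string // "cni"
		ServiceCIDR       string // "10.96.0.0/12"
	}

	type Node struct {
		Port              int // 8444, i.e. the --apiserver-port flag
		KubernetesVersion string
		ContainerRuntime  string
		ControlPlane      bool
		Worker            bool
	}

	type ClusterConfig struct {
		Name             string // "default-k8s-diff-port-997276"
		KicBaseImage     string // gcr.io/k8s-minikube/kicbase-builds:v0.0.48-...
		Memory           int    // 3072 (MB)
		CPUs             int    // 2
		APIServerPort    int    // 8444
		KubernetesConfig KubernetesConfig
		Nodes            []Node
	}
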
	I1006 19:55:24.331041  205530 out.go:179] * Starting "default-k8s-diff-port-997276" primary control-plane node in "default-k8s-diff-port-997276" cluster
	I1006 19:55:24.333882  205530 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:55:24.336763  205530 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:55:24.339607  205530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:55:24.339660  205530 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:55:24.339670  205530 cache.go:58] Caching tarball of preloaded images
	I1006 19:55:24.339794  205530 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:55:24.339804  205530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:55:24.339920  205530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/config.json ...
	I1006 19:55:24.339939  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/config.json: {Name:mkd5ce4a3412eea9ac3e4f2f74bfe10ec01e14ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:24.340102  205530 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:55:24.372782  205530 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:55:24.372802  205530 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:55:24.372824  205530 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:55:24.372849  205530 start.go:360] acquireMachinesLock for default-k8s-diff-port-997276: {Name:mk7b25a356bfff93cc3ef03a69dea8b7e852b578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:55:24.372962  205530 start.go:364] duration metric: took 95.214µs to acquireMachinesLock for "default-k8s-diff-port-997276"
	I1006 19:55:24.372992  205530 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:55:24.373063  205530 start.go:125] createHost starting for "" (driver="docker")
	I1006 19:55:22.831989  202624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 19:55:24.376616  205530 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 19:55:24.376848  205530 start.go:159] libmachine.API.Create for "default-k8s-diff-port-997276" (driver="docker")
	I1006 19:55:24.376887  205530 client.go:168] LocalClient.Create starting
	I1006 19:55:24.376976  205530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem
	I1006 19:55:24.377013  205530 main.go:141] libmachine: Decoding PEM data...
	I1006 19:55:24.377026  205530 main.go:141] libmachine: Parsing certificate...
	I1006 19:55:24.377079  205530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem
	I1006 19:55:24.377096  205530 main.go:141] libmachine: Decoding PEM data...
	I1006 19:55:24.377107  205530 main.go:141] libmachine: Parsing certificate...
	I1006 19:55:24.377468  205530 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 19:55:24.404420  205530 cli_runner.go:211] docker network inspect default-k8s-diff-port-997276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 19:55:24.404544  205530 network_create.go:284] running [docker network inspect default-k8s-diff-port-997276] to gather additional debugging logs...
	I1006 19:55:24.404569  205530 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997276
	W1006 19:55:24.445564  205530 cli_runner.go:211] docker network inspect default-k8s-diff-port-997276 returned with exit code 1
	I1006 19:55:24.445591  205530 network_create.go:287] error running [docker network inspect default-k8s-diff-port-997276]: docker network inspect default-k8s-diff-port-997276: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-997276 not found
	I1006 19:55:24.445616  205530 network_create.go:289] output of [docker network inspect default-k8s-diff-port-997276]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-997276 not found
	
	** /stderr **
	I1006 19:55:24.445715  205530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:55:24.470853  205530 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7058eae896da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:57:c0:bd:de:1b} reservation:<nil>}
	I1006 19:55:24.471156  205530 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e321ce8cf6dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:e6:76:87:fe:bb} reservation:<nil>}
	I1006 19:55:24.471475  205530 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-17bdbd7bd7be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:93:f3:19:55:10} reservation:<nil>}
	I1006 19:55:24.471918  205530 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7860}
	I1006 19:55:24.471936  205530 network_create.go:124] attempt to create docker network default-k8s-diff-port-997276 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1006 19:55:24.471996  205530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-997276 default-k8s-diff-port-997276
	I1006 19:55:24.546512  205530 network_create.go:108] docker network default-k8s-diff-port-997276 192.168.76.0/24 created
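
The three "skipping subnet ... that is taken" lines followed by "using free private subnet 192.168.76.0/24" show the start stepping through candidate private /24 subnets (192.168.49.0, .58.0, .67.0 are already backed by docker bridges) until it finds a free one, which then feeds the docker network create command above (gateway .1, node IP .2). A minimal sketch of that selection loop, under the assumption that the candidates advance by 9 as they do in this log and that taken() is a hypothetical helper reporting whether a subnet already backs a bridge:

	// Sketch only: pick the first free private /24 subnet, mirroring the
	// network.go lines above. The +9 stride matches the subnets seen in this
	// log (49, 58, 67, 76); taken() is a hypothetical helper.
	package main

	import (
		"errors"
		"fmt"
	)

	func pickFreeSubnet(taken func(cidr string) bool) (string, error) {
		for third := 49; third <= 255; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if taken(cidr) {
				continue // e.g. 192.168.49.0/24 already backs br-7058eae896da
			}
			return cidr, nil // in this log: 192.168.76.0/24
		}
		return "", errors.New("no free private /24 subnet found")
	}

	func main() {
		used := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
		}
		fmt.Println(pickFreeSubnet(func(cidr string) bool { return used[cidr] }))
	}
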
	I1006 19:55:24.546540  205530 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-997276" container
	I1006 19:55:24.546609  205530 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 19:55:24.580477  205530 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-997276 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-997276 --label created_by.minikube.sigs.k8s.io=true
	I1006 19:55:24.609503  205530 oci.go:103] Successfully created a docker volume default-k8s-diff-port-997276
	I1006 19:55:24.609598  205530 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-997276-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-997276 --entrypoint /usr/bin/test -v default-k8s-diff-port-997276:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 19:55:25.350514  205530 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-997276
	I1006 19:55:25.350558  205530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:55:25.350577  205530 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 19:55:25.350642  205530 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-997276:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 19:55:28.473231  202624 node_ready.go:49] node "embed-certs-830393" is "Ready"
	I1006 19:55:28.473259  202624 node_ready.go:38] duration metric: took 6.448446215s for node "embed-certs-830393" to be "Ready" ...
	I1006 19:55:28.473274  202624 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:55:28.473327  202624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:55:31.140861  202624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.962086933s)
	I1006 19:55:31.140919  202624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.825069701s)
	I1006 19:55:31.141289  202624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.309263343s)
	I1006 19:55:31.141988  202624 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.668647106s)
	I1006 19:55:31.142006  202624 api_server.go:72] duration metric: took 9.648374145s to wait for apiserver process to appear ...
	I1006 19:55:31.142013  202624 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:55:31.142028  202624 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:55:31.145379  202624 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-830393 addons enable metrics-server
	
	I1006 19:55:31.190762  202624 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 19:55:31.190789  202624 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 19:55:31.241284  202624 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1006 19:55:31.244321  202624 addons.go:514] duration metric: took 9.750265425s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1006 19:55:31.642525  202624 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:55:31.650672  202624 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1006 19:55:31.651858  202624 api_server.go:141] control plane version: v1.34.1
	I1006 19:55:31.651888  202624 api_server.go:131] duration metric: took 509.868862ms to wait for apiserver health ...
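
The healthz exchange above is the usual startup pattern: the first probe returns 500 with only [-]poststarthook/rbac/bootstrap-roles failing (a normal transient state right after the apiserver comes up), and the retry roughly 500ms later returns 200. A minimal sketch of that poll loop; the endpoint is taken from this log, and TLS verification is skipped here purely to keep the example short (minikube itself trusts the cluster CA):

	// Sketch only: poll the apiserver /healthz endpoint until it reports 200,
	// as the api_server.go lines above do.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // brevity only
		}
		url := "https://192.168.85.2:8443/healthz" // endpoint from the log above
		for {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // the log shows a ~500ms retry interval
		}
	}
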
	I1006 19:55:31.651898  202624 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:55:31.656078  202624 system_pods.go:59] 8 kube-system pods found
	I1006 19:55:31.656113  202624 system_pods.go:61] "coredns-66bc5c9577-8k4cq" [e6b9c4d9-313c-467e-b448-9867361a42fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:55:31.656125  202624 system_pods.go:61] "etcd-embed-certs-830393" [94302cea-26c9-4ff5-bcff-ef3903324bfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:55:31.656131  202624 system_pods.go:61] "kindnet-g7jnc" [4f869226-920a-4722-aa82-308466e32e59] Running
	I1006 19:55:31.656141  202624 system_pods.go:61] "kube-apiserver-embed-certs-830393" [7c31daf2-0bbe-4d0d-a233-79da10ee31b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:55:31.656151  202624 system_pods.go:61] "kube-controller-manager-embed-certs-830393" [5cdae168-9643-4ada-ac16-35f8f337d1fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:55:31.656163  202624 system_pods.go:61] "kube-proxy-xl5tt" [75361417-428d-4ea5-89ad-4570024b8916] Running
	I1006 19:55:31.656173  202624 system_pods.go:61] "kube-scheduler-embed-certs-830393" [be02b2b3-c2c3-458c-8ec2-3d6e2b334427] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:55:31.656179  202624 system_pods.go:61] "storage-provisioner" [c6173113-55a1-44a2-b622-34ba6868ea4c] Running
	I1006 19:55:31.656186  202624 system_pods.go:74] duration metric: took 4.282473ms to wait for pod list to return data ...
	I1006 19:55:31.656195  202624 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:55:31.658343  202624 default_sa.go:45] found service account: "default"
	I1006 19:55:31.658365  202624 default_sa.go:55] duration metric: took 2.163904ms for default service account to be created ...
	I1006 19:55:31.658383  202624 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 19:55:31.661623  202624 system_pods.go:86] 8 kube-system pods found
	I1006 19:55:31.661657  202624 system_pods.go:89] "coredns-66bc5c9577-8k4cq" [e6b9c4d9-313c-467e-b448-9867361a42fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:55:31.661666  202624 system_pods.go:89] "etcd-embed-certs-830393" [94302cea-26c9-4ff5-bcff-ef3903324bfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:55:31.661675  202624 system_pods.go:89] "kindnet-g7jnc" [4f869226-920a-4722-aa82-308466e32e59] Running
	I1006 19:55:31.661683  202624 system_pods.go:89] "kube-apiserver-embed-certs-830393" [7c31daf2-0bbe-4d0d-a233-79da10ee31b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:55:31.661689  202624 system_pods.go:89] "kube-controller-manager-embed-certs-830393" [5cdae168-9643-4ada-ac16-35f8f337d1fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:55:31.661694  202624 system_pods.go:89] "kube-proxy-xl5tt" [75361417-428d-4ea5-89ad-4570024b8916] Running
	I1006 19:55:31.661701  202624 system_pods.go:89] "kube-scheduler-embed-certs-830393" [be02b2b3-c2c3-458c-8ec2-3d6e2b334427] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:55:31.661709  202624 system_pods.go:89] "storage-provisioner" [c6173113-55a1-44a2-b622-34ba6868ea4c] Running
	I1006 19:55:31.661717  202624 system_pods.go:126] duration metric: took 3.328077ms to wait for k8s-apps to be running ...
	I1006 19:55:31.661731  202624 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 19:55:31.661793  202624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:55:31.675095  202624 system_svc.go:56] duration metric: took 13.356417ms WaitForService to wait for kubelet
	I1006 19:55:31.675124  202624 kubeadm.go:586] duration metric: took 10.181489333s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:55:31.675143  202624 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:55:31.678230  202624 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:55:31.678263  202624 node_conditions.go:123] node cpu capacity is 2
	I1006 19:55:31.678276  202624 node_conditions.go:105] duration metric: took 3.108471ms to run NodePressure ...
	I1006 19:55:31.678288  202624 start.go:241] waiting for startup goroutines ...
	I1006 19:55:31.678295  202624 start.go:246] waiting for cluster config update ...
	I1006 19:55:31.678307  202624 start.go:255] writing updated cluster config ...
	I1006 19:55:31.678603  202624 ssh_runner.go:195] Run: rm -f paused
	I1006 19:55:31.682169  202624 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:55:31.685921  202624 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8k4cq" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:55:30.173219  205530 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-997276:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.822539425s)
	I1006 19:55:30.173247  205530 kic.go:203] duration metric: took 4.822666796s to extract preloaded images to volume ...
	W1006 19:55:30.173400  205530 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 19:55:30.173505  205530 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 19:55:30.308102  205530 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-997276 --name default-k8s-diff-port-997276 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-997276 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-997276 --network default-k8s-diff-port-997276 --ip 192.168.76.2 --volume default-k8s-diff-port-997276:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 19:55:30.701433  205530 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Running}}
	I1006 19:55:30.729527  205530 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:55:30.761882  205530 cli_runner.go:164] Run: docker exec default-k8s-diff-port-997276 stat /var/lib/dpkg/alternatives/iptables
	I1006 19:55:30.828255  205530 oci.go:144] the created container "default-k8s-diff-port-997276" has a running status.
	I1006 19:55:30.828290  205530 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa...
	I1006 19:55:31.050700  205530 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 19:55:31.075268  205530 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:55:31.098858  205530 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 19:55:31.098885  205530 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-997276 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 19:55:31.169635  205530 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:55:31.195559  205530 machine.go:93] provisionDockerMachine start ...
	I1006 19:55:31.195661  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:31.221188  205530 main.go:141] libmachine: Using SSH client type: native
	I1006 19:55:31.221519  205530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1006 19:55:31.221528  205530 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:55:31.222122  205530 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
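
The "Error dialing TCP: ssh: handshake failed: EOF" line is expected: the container was started only a second earlier and its SSH server is not yet accepting connections; the same provisioning step succeeds at 19:55:34 a few lines below. A minimal sketch of that retry-until-sshd-answers behaviour using golang.org/x/crypto/ssh, with the key path and forwarded port taken from this log and the host-key check skipped purely for brevity:

	// Sketch only: keep dialing the forwarded SSH port until the container's
	// sshd answers, as the provisioning step above effectively does.
	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only
			Timeout:         5 * time.Second,
		}
		for {
			client, err := ssh.Dial("tcp", "127.0.0.1:33080", cfg) // port from the log
			if err != nil {
				time.Sleep(time.Second) // sshd not up yet: retry
				continue
			}
			client.Close()
			fmt.Println("ssh is up")
			return
		}
	}
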
	W1006 19:55:33.695245  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	W1006 19:55:36.191457  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	I1006 19:55:34.367556  205530 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-997276
	
	I1006 19:55:34.367590  205530 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-997276"
	I1006 19:55:34.367656  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:34.390836  205530 main.go:141] libmachine: Using SSH client type: native
	I1006 19:55:34.391147  205530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1006 19:55:34.391169  205530 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-997276 && echo "default-k8s-diff-port-997276" | sudo tee /etc/hostname
	I1006 19:55:34.549886  205530 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-997276
	
	I1006 19:55:34.550002  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:34.573291  205530 main.go:141] libmachine: Using SSH client type: native
	I1006 19:55:34.573617  205530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1006 19:55:34.573640  205530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-997276' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-997276/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-997276' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:55:34.724422  205530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:55:34.724500  205530 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:55:34.724535  205530 ubuntu.go:190] setting up certificates
	I1006 19:55:34.724571  205530 provision.go:84] configureAuth start
	I1006 19:55:34.724673  205530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997276
	I1006 19:55:34.748293  205530 provision.go:143] copyHostCerts
	I1006 19:55:34.748353  205530 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:55:34.748361  205530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:55:34.748431  205530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:55:34.748518  205530 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:55:34.748524  205530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:55:34.748549  205530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:55:34.748596  205530 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:55:34.748605  205530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:55:34.748628  205530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:55:34.748686  205530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-997276 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-997276 localhost minikube]
	I1006 19:55:35.236098  205530 provision.go:177] copyRemoteCerts
	I1006 19:55:35.236217  205530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:55:35.236296  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:35.255957  205530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:55:35.368629  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 19:55:35.404243  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:55:35.429211  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1006 19:55:35.450593  205530 provision.go:87] duration metric: took 725.982028ms to configureAuth
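
The configureAuth step above generates a server certificate signed by the pre-existing minikube CA with the SANs reported on the provision.go:117 line (127.0.0.1, 192.168.76.2, the profile name, localhost, minikube), then copies it into /etc/docker on the node. A minimal sketch of issuing such a certificate with Go's standard library, assuming the CA key is an RSA PKCS#1 PEM and collapsing error handling into must():

	// Sketch only: issue a server certificate signed by the existing CA, with
	// the SANs and organization listed in the configureAuth step above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		caPEM, err := os.ReadFile("certs/ca.pem") // paths abbreviated from the log
		must(err)
		caKeyPEM, err := os.ReadFile("certs/ca-key.pem")
		must(err)
		caBlock, _ := pem.Decode(caPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		must(err)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
		must(err)

		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-997276"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs reported on the provision.go:117 line above:
			DNSNames:    []string{"default-k8s-diff-port-997276", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		must(err)
		must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}
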
	I1006 19:55:35.450704  205530 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:55:35.450934  205530 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:55:35.451074  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:35.472704  205530 main.go:141] libmachine: Using SSH client type: native
	I1006 19:55:35.473002  205530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1006 19:55:35.473017  205530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:55:35.896747  205530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:55:35.896768  205530 machine.go:96] duration metric: took 4.701180081s to provisionDockerMachine
	I1006 19:55:35.896778  205530 client.go:171] duration metric: took 11.519885708s to LocalClient.Create
	I1006 19:55:35.896799  205530 start.go:167] duration metric: took 11.519952204s to libmachine.API.Create "default-k8s-diff-port-997276"
	I1006 19:55:35.896806  205530 start.go:293] postStartSetup for "default-k8s-diff-port-997276" (driver="docker")
	I1006 19:55:35.896816  205530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:55:35.896896  205530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:55:35.896937  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:35.924989  205530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:55:36.037865  205530 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:55:36.042141  205530 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:55:36.042174  205530 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:55:36.042185  205530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:55:36.042257  205530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:55:36.042356  205530 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:55:36.042465  205530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:55:36.054578  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:55:36.081880  205530 start.go:296] duration metric: took 185.059839ms for postStartSetup
	I1006 19:55:36.082245  205530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997276
	I1006 19:55:36.110424  205530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/config.json ...
	I1006 19:55:36.110718  205530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:55:36.110780  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:36.135102  205530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:55:36.233218  205530 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:55:36.240232  205530 start.go:128] duration metric: took 11.867153691s to createHost
	I1006 19:55:36.240258  205530 start.go:83] releasing machines lock for "default-k8s-diff-port-997276", held for 11.867287314s
	I1006 19:55:36.240358  205530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997276
	I1006 19:55:36.271876  205530 ssh_runner.go:195] Run: cat /version.json
	I1006 19:55:36.271927  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:36.272211  205530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:55:36.272260  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:55:36.308533  205530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:55:36.311617  205530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:55:36.517106  205530 ssh_runner.go:195] Run: systemctl --version
	I1006 19:55:36.529300  205530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:55:36.584053  205530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:55:36.589347  205530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:55:36.589418  205530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:55:36.630789  205530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 19:55:36.630864  205530 start.go:495] detecting cgroup driver to use...
	I1006 19:55:36.630922  205530 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:55:36.630996  205530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:55:36.653757  205530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:55:36.669282  205530 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:55:36.669412  205530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:55:36.687827  205530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:55:36.709873  205530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:55:36.873054  205530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:55:37.061088  205530 docker.go:234] disabling docker service ...
	I1006 19:55:37.061205  205530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:55:37.085201  205530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:55:37.099510  205530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:55:37.273048  205530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:55:37.439166  205530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:55:37.454398  205530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:55:37.470811  205530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:55:37.470914  205530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:55:37.480892  205530 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:55:37.480996  205530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:55:37.490619  205530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:55:37.500627  205530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:55:37.510371  205530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:55:37.519984  205530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:55:37.530153  205530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:55:37.546529  205530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
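
Read together, the sed commands above rewrite CRI-O's /etc/crio/crio.conf.d/02-crio.conf drop-in so that the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl match what this start needs before crio is restarted. A sketch of the relevant keys only; the real file carries more settings, and any section headers are omitted here as an assumption:

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys only, as assembled by the sed edits above)
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
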
	I1006 19:55:37.558696  205530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:55:37.568084  205530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:55:37.577070  205530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:55:37.740653  205530 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 19:55:38.139465  205530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:55:38.139601  205530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:55:38.146699  205530 start.go:563] Will wait 60s for crictl version
	I1006 19:55:38.146831  205530 ssh_runner.go:195] Run: which crictl
	I1006 19:55:38.151362  205530 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:55:38.210220  205530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:55:38.210408  205530 ssh_runner.go:195] Run: crio --version
	I1006 19:55:38.249457  205530 ssh_runner.go:195] Run: crio --version
	I1006 19:55:38.292555  205530 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:55:38.295940  205530 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:55:38.318849  205530 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1006 19:55:38.323415  205530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:55:38.337204  205530 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:55:38.337323  205530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:55:38.337383  205530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:55:38.383180  205530 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:55:38.383205  205530 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:55:38.383267  205530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:55:38.412212  205530 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:55:38.412235  205530 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:55:38.412244  205530 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1006 19:55:38.412330  205530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-997276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:55:38.412414  205530 ssh_runner.go:195] Run: crio config
	I1006 19:55:38.488581  205530 cni.go:84] Creating CNI manager for ""
	I1006 19:55:38.488606  205530 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:55:38.488623  205530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:55:38.488648  205530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-997276 NodeName:default-k8s-diff-port-997276 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:55:38.488785  205530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-997276"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:55:38.488861  205530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:55:38.497928  205530 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:55:38.497999  205530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:55:38.506369  205530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1006 19:55:38.521658  205530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:55:38.536419  205530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
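Note: the rendered kubeadm config dumped above is copied to /var/tmp/minikube/kubeadm.yaml.new before init. As a hedged aside (not something this test run does), the file can be sanity-checked without mutating node state by using kubeadm's dry-run mode:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run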
	I1006 19:55:38.550863  205530 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:55:38.554837  205530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:55:38.565475  205530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:55:38.737921  205530 ssh_runner.go:195] Run: sudo systemctl start kubelet
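Note: the two scp'd unit files above are the kubelet service and its 10-kubeadm.conf drop-in (contents logged at kubeadm.go:946 earlier). A hedged way to confirm the drop-in took effect after the daemon-reload, assuming the usual systemd tooling inside the kicbase container:

    systemctl cat kubelet        # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf override
    systemctl is-active kubelet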
	I1006 19:55:38.763116  205530 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276 for IP: 192.168.76.2
	I1006 19:55:38.763183  205530 certs.go:195] generating shared ca certs ...
	I1006 19:55:38.763221  205530 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:38.763408  205530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:55:38.763478  205530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:55:38.763500  205530 certs.go:257] generating profile certs ...
	I1006 19:55:38.763608  205530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.key
	I1006 19:55:38.763646  205530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt with IP's: []
	I1006 19:55:38.856930  205530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt ...
	I1006 19:55:38.860985  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt: {Name:mkf67fc2f64c7a2ccbcdd74677adbd93488d1642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:38.861223  205530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.key ...
	I1006 19:55:38.861297  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.key: {Name:mk8ac8080717b75118d00305ee1ec5b75f55e0eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:38.861460  205530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key.24aba503
	I1006 19:55:38.861510  205530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt.24aba503 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1006 19:55:39.393830  205530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt.24aba503 ...
	I1006 19:55:39.393912  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt.24aba503: {Name:mk87297d9542e70bca499539cae71710f9d860ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:39.395001  205530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key.24aba503 ...
	I1006 19:55:39.395051  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key.24aba503: {Name:mkcb2f53bece7cc459436bfb9db916aaf6edf647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:39.395196  205530 certs.go:382] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt.24aba503 -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt
	I1006 19:55:39.395336  205530 certs.go:386] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key.24aba503 -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key
	I1006 19:55:39.397596  205530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.key
	I1006 19:55:39.397640  205530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.crt with IP's: []
	I1006 19:55:40.119973  205530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.crt ...
	I1006 19:55:40.120047  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.crt: {Name:mk227bc98fe0b9db02e4dc4b2380f575480009a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:55:40.120267  205530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.key ...
	I1006 19:55:40.120303  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.key: {Name:mk00a4f139a7cedbae2b90cf02b8ac6d766984a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
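Note: among the profile certs generated above, the apiserver cert is signed for IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]. A hedged sketch for inspecting the SANs of the written certificate with openssl (not part of the test flow):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt \
      | grep -A1 'Subject Alternative Name'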
	I1006 19:55:40.120574  205530 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:55:40.120647  205530 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:55:40.120671  205530 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:55:40.120726  205530 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:55:40.120776  205530 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:55:40.120834  205530 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:55:40.120909  205530 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:55:40.121520  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:55:40.145770  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:55:40.176613  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:55:40.203165  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:55:40.227418  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1006 19:55:40.249997  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 19:55:40.273550  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:55:40.325650  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 19:55:40.375797  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:55:40.398156  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:55:40.429456  205530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:55:40.452237  205530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:55:40.467345  205530 ssh_runner.go:195] Run: openssl version
	I1006 19:55:40.474454  205530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:55:40.483956  205530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:55:40.488271  205530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:55:40.488412  205530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:55:40.532245  205530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:55:40.541271  205530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:55:40.551493  205530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:55:40.556121  205530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:55:40.556266  205530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:55:40.598715  205530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:55:40.607644  205530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:55:40.617099  205530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:55:40.621486  205530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:55:40.621600  205530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:55:40.663975  205530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
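Note: each ln -fs above publishes a CA under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem) so TLS clients on the node can locate it. A condensed sketch of the same idiom with the hash computed inline; illustrative, not the exact command the test runs:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$pem").0"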
	I1006 19:55:40.673760  205530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:55:40.678486  205530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 19:55:40.678611  205530 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:55:40.678760  205530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:55:40.678864  205530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:55:40.709144  205530 cri.go:89] found id: ""
	I1006 19:55:40.709269  205530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:55:40.719615  205530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 19:55:40.728424  205530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:55:40.728549  205530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:55:40.739447  205530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:55:40.739525  205530 kubeadm.go:157] found existing configuration files:
	
	I1006 19:55:40.739607  205530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1006 19:55:40.749173  205530 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:55:40.749289  205530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:55:40.758606  205530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1006 19:55:40.767967  205530 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:55:40.768080  205530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:55:40.776501  205530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1006 19:55:40.785724  205530 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:55:40.785839  205530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:55:40.794336  205530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1006 19:55:40.803201  205530 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:55:40.803310  205530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 19:55:40.811046  205530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:55:40.861518  205530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:55:40.862064  205530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:55:40.904437  205530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:55:40.904787  205530 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:55:40.904855  205530 kubeadm.go:318] OS: Linux
	I1006 19:55:40.904955  205530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:55:40.905042  205530 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:55:40.905142  205530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:55:40.905232  205530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:55:40.905311  205530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:55:40.905382  205530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:55:40.905468  205530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:55:40.905561  205530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:55:40.905640  205530 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:55:41.008488  205530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:55:41.008675  205530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:55:41.008816  205530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:55:41.025705  205530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1006 19:55:38.192472  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	W1006 19:55:40.194306  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	W1006 19:55:42.195773  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	I1006 19:55:41.032323  205530 out.go:252]   - Generating certificates and keys ...
	I1006 19:55:41.032497  205530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:55:41.032603  205530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:55:41.729302  205530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 19:55:41.888092  205530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 19:55:42.212259  205530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 19:55:43.105882  205530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 19:55:43.587669  205530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 19:55:43.588343  205530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-997276 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1006 19:55:44.696986  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	W1006 19:55:47.192410  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	I1006 19:55:44.240678  205530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 19:55:44.241269  205530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-997276 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1006 19:55:45.671407  205530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 19:55:45.777740  205530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 19:55:46.726732  205530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 19:55:46.727296  205530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 19:55:47.299820  205530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:55:47.592234  205530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:55:48.303930  205530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:55:48.704894  205530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:55:49.395205  205530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:55:49.396209  205530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:55:49.399087  205530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1006 19:55:49.192658  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	W1006 19:55:51.193231  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	I1006 19:55:49.402724  205530 out.go:252]   - Booting up control plane ...
	I1006 19:55:49.402848  205530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:55:49.402948  205530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:55:49.403023  205530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:55:49.422181  205530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:55:49.422560  205530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:55:49.430164  205530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:55:49.430477  205530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:55:49.430527  205530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:55:49.576158  205530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:55:49.576287  205530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:55:51.072117  205530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501518624s
	I1006 19:55:51.075863  205530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:55:51.075970  205530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1006 19:55:51.076266  205530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:55:51.076362  205530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 19:55:54.597958  205530 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.521523913s
	I1006 19:55:57.470529  205530 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.394653523s
	I1006 19:55:57.578227  205530 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502124141s
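Note: kubeadm's control-plane checks above poll fixed local endpoints. A hedged sketch of probing the same endpoints by hand (URLs taken from the log; -k because the serving certs are cluster-signed):

    curl -s  http://127.0.0.1:10248/healthz      # kubelet
    curl -sk https://192.168.76.2:8444/livez     # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez       # kube-scheduler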
	I1006 19:55:57.601203  205530 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 19:55:57.613770  205530 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 19:55:57.632952  205530 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 19:55:57.633175  205530 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-997276 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 19:55:57.644776  205530 kubeadm.go:318] [bootstrap-token] Using token: k5hkyd.rr7hzkr13nxttfpd
	W1006 19:55:53.694289  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	W1006 19:55:56.192861  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	I1006 19:55:57.647790  205530 out.go:252]   - Configuring RBAC rules ...
	I1006 19:55:57.647939  205530 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 19:55:57.657314  205530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 19:55:57.666020  205530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 19:55:57.670392  205530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 19:55:57.674680  205530 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 19:55:57.678892  205530 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 19:55:57.985681  205530 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 19:55:58.455342  205530 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1006 19:55:58.988302  205530 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1006 19:55:58.989748  205530 kubeadm.go:318] 
	I1006 19:55:58.989825  205530 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1006 19:55:58.989831  205530 kubeadm.go:318] 
	I1006 19:55:58.989915  205530 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1006 19:55:58.989920  205530 kubeadm.go:318] 
	I1006 19:55:58.989947  205530 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1006 19:55:58.990377  205530 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 19:55:58.990436  205530 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 19:55:58.990441  205530 kubeadm.go:318] 
	I1006 19:55:58.990497  205530 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1006 19:55:58.990502  205530 kubeadm.go:318] 
	I1006 19:55:58.990552  205530 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 19:55:58.990557  205530 kubeadm.go:318] 
	I1006 19:55:58.990611  205530 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1006 19:55:58.990689  205530 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 19:55:58.990760  205530 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 19:55:58.990764  205530 kubeadm.go:318] 
	I1006 19:55:58.991077  205530 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 19:55:58.991165  205530 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1006 19:55:58.991171  205530 kubeadm.go:318] 
	I1006 19:55:58.991461  205530 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token k5hkyd.rr7hzkr13nxttfpd \
	I1006 19:55:58.991574  205530 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd \
	I1006 19:55:58.991813  205530 kubeadm.go:318] 	--control-plane 
	I1006 19:55:58.991832  205530 kubeadm.go:318] 
	I1006 19:55:58.992161  205530 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1006 19:55:58.992172  205530 kubeadm.go:318] 
	I1006 19:55:58.992524  205530 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token k5hkyd.rr7hzkr13nxttfpd \
	I1006 19:55:58.992645  205530 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd 
	I1006 19:55:58.997155  205530 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 19:55:58.997429  205530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:55:58.997559  205530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
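Note: the join command above embeds a --discovery-token-ca-cert-hash. For reference, the standard kubeadm recipe to recompute that sha256 from the cluster CA on the control-plane node is sketched below; it assumes an RSA CA key and uses the certificatesDir from this run's config:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'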
	I1006 19:55:58.997587  205530 cni.go:84] Creating CNI manager for ""
	I1006 19:55:58.997595  205530 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:55:59.000638  205530 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1006 19:55:59.003675  205530 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 19:55:59.008580  205530 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1006 19:55:59.008602  205530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	W1006 19:55:58.691238  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	W1006 19:56:00.700334  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	I1006 19:55:59.024844  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 19:55:59.321161  205530 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 19:55:59.321393  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:55:59.321552  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-997276 minikube.k8s.io/updated_at=2025_10_06T19_55_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81 minikube.k8s.io/name=default-k8s-diff-port-997276 minikube.k8s.io/primary=true
	I1006 19:55:59.519500  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:55:59.519565  205530 ops.go:34] apiserver oom_adj: -16
	I1006 19:56:00.023896  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:00.519646  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:01.020194  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:01.520326  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:02.022595  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:02.519833  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:03.026546  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:03.520113  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:04.020219  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:04.520116  205530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:56:04.709526  205530 kubeadm.go:1113] duration metric: took 5.388209939s to wait for elevateKubeSystemPrivileges
	I1006 19:56:04.709560  205530 kubeadm.go:402] duration metric: took 24.030955182s to StartCluster
	I1006 19:56:04.709580  205530 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:04.709666  205530 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:56:04.711677  205530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:04.712256  205530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 19:56:04.712322  205530 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:56:04.712535  205530 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:56:04.712610  205530 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-997276"
	I1006 19:56:04.712623  205530 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-997276"
	I1006 19:56:04.712647  205530 host.go:66] Checking if "default-k8s-diff-port-997276" exists ...
	I1006 19:56:04.712526  205530 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:56:04.713027  205530 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-997276"
	I1006 19:56:04.713053  205530 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-997276"
	I1006 19:56:04.713096  205530 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:56:04.713439  205530 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:56:04.719760  205530 out.go:179] * Verifying Kubernetes components...
	I1006 19:56:04.723320  205530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:56:04.762024  205530 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-997276"
	I1006 19:56:04.762196  205530 host.go:66] Checking if "default-k8s-diff-port-997276" exists ...
	I1006 19:56:04.762931  205530 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:56:04.788754  205530 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1006 19:56:03.192206  202624 pod_ready.go:104] pod "coredns-66bc5c9577-8k4cq" is not "Ready", error: <nil>
	I1006 19:56:04.191511  202624 pod_ready.go:94] pod "coredns-66bc5c9577-8k4cq" is "Ready"
	I1006 19:56:04.191536  202624 pod_ready.go:86] duration metric: took 32.50558766s for pod "coredns-66bc5c9577-8k4cq" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.194202  202624 pod_ready.go:83] waiting for pod "etcd-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.199267  202624 pod_ready.go:94] pod "etcd-embed-certs-830393" is "Ready"
	I1006 19:56:04.199294  202624 pod_ready.go:86] duration metric: took 5.063985ms for pod "etcd-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.201916  202624 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.207381  202624 pod_ready.go:94] pod "kube-apiserver-embed-certs-830393" is "Ready"
	I1006 19:56:04.207426  202624 pod_ready.go:86] duration metric: took 5.481437ms for pod "kube-apiserver-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.210389  202624 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.391748  202624 pod_ready.go:94] pod "kube-controller-manager-embed-certs-830393" is "Ready"
	I1006 19:56:04.391780  202624 pod_ready.go:86] duration metric: took 181.362549ms for pod "kube-controller-manager-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.590344  202624 pod_ready.go:83] waiting for pod "kube-proxy-xl5tt" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:04.989638  202624 pod_ready.go:94] pod "kube-proxy-xl5tt" is "Ready"
	I1006 19:56:04.989672  202624 pod_ready.go:86] duration metric: took 399.255074ms for pod "kube-proxy-xl5tt" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:05.190176  202624 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:05.589734  202624 pod_ready.go:94] pod "kube-scheduler-embed-certs-830393" is "Ready"
	I1006 19:56:05.589769  202624 pod_ready.go:86] duration metric: took 399.564079ms for pod "kube-scheduler-embed-certs-830393" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:05.589782  202624 pod_ready.go:40] duration metric: took 33.907581556s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:56:05.691865  202624 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 19:56:05.695093  202624 out.go:179] * Done! kubectl is now configured to use "embed-certs-830393" cluster and "default" namespace by default
	I1006 19:56:04.800205  205530 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:56:04.800230  205530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 19:56:04.800313  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:56:04.808260  205530 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 19:56:04.808279  205530 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 19:56:04.808344  205530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:56:04.839523  205530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:56:04.861412  205530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:56:05.021506  205530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 19:56:05.137871  205530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:56:05.206197  205530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:56:05.206436  205530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1006 19:56:06.077784  205530 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-997276" to be "Ready" ...
	I1006 19:56:06.078020  205530 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
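Note: the sed pipeline above injects a hosts stanza ahead of the forward block in the CoreDNS Corefile, which is what the "host record injected" line confirms. Reconstructed from that sed expression, the ConfigMap should now contain roughly:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }

A hedged way to confirm against the new cluster is: kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'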
	I1006 19:56:06.082304  205530 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1006 19:56:06.085443  205530 addons.go:514] duration metric: took 1.372885884s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1006 19:56:06.581686  205530 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-997276" context rescaled to 1 replicas
	W1006 19:56:08.080827  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:10.581013  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:13.081695  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:15.082166  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:17.581562  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.154459433Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=204003fa-9342-4e83-83c5-1ca9caa3149c name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.155796647Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5c16859e-8160-4bed-b424-1cc04e673cf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.156187621Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.256834272Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.257100484Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/319f9ba7e737005b7bb19c1f326585a9647858af4060a0961b26ef78185c44f2/merged/etc/passwd: no such file or directory"
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.25713827Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/319f9ba7e737005b7bb19c1f326585a9647858af4060a0961b26ef78185c44f2/merged/etc/group: no such file or directory"
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.257530982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.376288273Z" level=info msg="Created container f4d2a9fccfdd6f393221414a6c0aa8a77d35201a86d3a04c6be9f91649344122: kube-system/storage-provisioner/storage-provisioner" id=5c16859e-8160-4bed-b424-1cc04e673cf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.378126115Z" level=info msg="Starting container: f4d2a9fccfdd6f393221414a6c0aa8a77d35201a86d3a04c6be9f91649344122" id=c8b89383-96da-4aca-aecd-9bc70da6e760 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:56:00 embed-certs-830393 crio[651]: time="2025-10-06T19:56:00.38101411Z" level=info msg="Started container" PID=1644 containerID=f4d2a9fccfdd6f393221414a6c0aa8a77d35201a86d3a04c6be9f91649344122 description=kube-system/storage-provisioner/storage-provisioner id=c8b89383-96da-4aca-aecd-9bc70da6e760 name=/runtime.v1.RuntimeService/StartContainer sandboxID=138c5c6a9e75f2c957c36d7e7c145c6349f4250be57261a3833a12ef1de97899
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.704971719Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.71582191Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.715855872Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.715883138Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.71937715Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.719530343Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.719607637Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.723149281Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.723187493Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.723212379Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.728037288Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.728073432Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.728097079Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.731445259Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:56:09 embed-certs-830393 crio[651]: time="2025-10-06T19:56:09.7314795Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f4d2a9fccfdd6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago       Running             storage-provisioner         2                   138c5c6a9e75f       storage-provisioner                          kube-system
	402ce39bb0201       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago       Exited              dashboard-metrics-scraper   2                   5eb0b48f2176a       dashboard-metrics-scraper-6ffb444bf9-rhnrq   kubernetes-dashboard
	02e7281204657       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   0b55399d2fc79       kubernetes-dashboard-855c9754f9-dg6tb        kubernetes-dashboard
	f683a297d5110       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   138c5c6a9e75f       storage-provisioner                          kube-system
	e94dede38eab4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   7f426bcae331d       kube-proxy-xl5tt                             kube-system
	53fa3c45d9d3f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   d8c1f7f60dd66       coredns-66bc5c9577-8k4cq                     kube-system
	2bfab853c45dc       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   23688320bc7e9       busybox                                      default
	e2adf0d6acf04       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   5d5fcae9b370f       kindnet-g7jnc                                kube-system
	feaea5c591582       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   8a9a8c2203f5c       kube-controller-manager-embed-certs-830393   kube-system
	5aac2d3517c5e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   74415b04ebeda       kube-scheduler-embed-certs-830393            kube-system
	5d7f3ed046188       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   f8d0fb46bb3c6       etcd-embed-certs-830393                      kube-system
	1ca5cdfb3593e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   e0643df874155       kube-apiserver-embed-certs-830393            kube-system
	
	
	==> coredns [53fa3c45d9d3fe4b234e83920d4c40b4da0c696f8345d0661f1282310ea0a883] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43496 - 26265 "HINFO IN 6717465657593695425.5428441928682522278. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00472253s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-830393
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-830393
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=embed-certs-830393
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_53_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:53:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-830393
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:56:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:55:59 +0000   Mon, 06 Oct 2025 19:53:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:55:59 +0000   Mon, 06 Oct 2025 19:53:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:55:59 +0000   Mon, 06 Oct 2025 19:53:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:55:59 +0000   Mon, 06 Oct 2025 19:54:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-830393
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bfe5a7d390a49d3be8226fc84b92394
	  System UUID:                f887c677-54f6-492d-93f4-e65ae4538988
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-8k4cq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m19s
	  kube-system                 etcd-embed-certs-830393                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-g7jnc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-embed-certs-830393             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-embed-certs-830393    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-xl5tt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-embed-certs-830393             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rhnrq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dg6tb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node embed-certs-830393 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node embed-certs-830393 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m33s)  kubelet          Node embed-certs-830393 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node embed-certs-830393 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node embed-certs-830393 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node embed-certs-830393 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m20s                  node-controller  Node embed-certs-830393 event: Registered Node embed-certs-830393 in Controller
	  Normal   NodeReady                98s                    kubelet          Node embed-certs-830393 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 63s)      kubelet          Node embed-certs-830393 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 63s)      kubelet          Node embed-certs-830393 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 63s)      kubelet          Node embed-certs-830393 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                    node-controller  Node embed-certs-830393 event: Registered Node embed-certs-830393 in Controller
	
	
	==> dmesg <==
	[Oct 6 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:52] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:55] overlayfs: idmapped layers are currently not supported
	[ +29.761517] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5d7f3ed0461886cd83205860bc71296168bd61a981970cd2dc78f60ba117030e] <==
	{"level":"warn","ts":"2025-10-06T19:55:25.969615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.013977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.071925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.092181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.142276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.191120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.244117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.294297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.343913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.377818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.423966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.494194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.543504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.560710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.578028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.618752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:26.737172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54246","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T19:55:29.171588Z","caller":"traceutil/trace.go:172","msg":"trace[129443978] transaction","detail":"{read_only:false; number_of_response:0; response_revision:503; }","duration":"106.839389ms","start":"2025-10-06T19:55:29.064732Z","end":"2025-10-06T19:55:29.171571Z","steps":["trace[129443978] 'process raft request'  (duration: 62.695423ms)","trace[129443978] 'compare'  (duration: 44.018359ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-06T19:55:29.332885Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.98837ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" limit:1 ","response":"range_response_count:1 size:840"}
	{"level":"info","ts":"2025-10-06T19:55:29.332952Z","caller":"traceutil/trace.go:172","msg":"trace[948330574] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:1; response_revision:507; }","duration":"111.077151ms","start":"2025-10-06T19:55:29.221862Z","end":"2025-10-06T19:55:29.332939Z","steps":["trace[948330574] 'agreement among raft nodes before linearized reading'  (duration: 57.860562ms)","trace[948330574] 'range keys from in-memory index tree'  (duration: 53.056061ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-06T19:55:29.333194Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.21871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/embed-certs-830393.186bff0f5ddac015\" limit:1 ","response":"range_response_count:1 size:723"}
	{"level":"info","ts":"2025-10-06T19:55:29.333261Z","caller":"traceutil/trace.go:172","msg":"trace[2020034856] transaction","detail":"{read_only:false; response_revision:508; number_of_response:1; }","duration":"124.01386ms","start":"2025-10-06T19:55:29.209237Z","end":"2025-10-06T19:55:29.333251Z","steps":["trace[2020034856] 'process raft request'  (duration: 70.538764ms)","trace[2020034856] 'compare'  (duration: 52.940851ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-06T19:55:29.333586Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.366399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2025-10-06T19:55:29.333623Z","caller":"traceutil/trace.go:172","msg":"trace[1163279525] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:508; }","duration":"103.405538ms","start":"2025-10-06T19:55:29.230209Z","end":"2025-10-06T19:55:29.333614Z","steps":["trace[1163279525] 'agreement among raft nodes before linearized reading'  (duration: 103.31825ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-06T19:55:29.333280Z","caller":"traceutil/trace.go:172","msg":"trace[2040801756] range","detail":"{range_begin:/registry/events/default/embed-certs-830393.186bff0f5ddac015; range_end:; response_count:1; response_revision:508; }","duration":"107.312956ms","start":"2025-10-06T19:55:29.225956Z","end":"2025-10-06T19:55:29.333269Z","steps":["trace[2040801756] 'agreement among raft nodes before linearized reading'  (duration: 107.148555ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:56:23 up  1:38,  0 user,  load average: 4.10, 2.82, 2.10
	Linux embed-certs-830393 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e2adf0d6acf04da953a1454524da4257845c1f019a6c67dbf5feef047497298e] <==
	I1006 19:55:29.485884       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:55:29.493176       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1006 19:55:29.493337       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:55:29.493350       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:55:29.493361       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:55:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:55:29.746873       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:55:29.746912       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:55:29.746980       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:55:29.747112       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1006 19:55:59.703158       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1006 19:55:59.708772       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1006 19:55:59.708779       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1006 19:55:59.708959       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1006 19:56:01.047280       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:56:01.047394       1 metrics.go:72] Registering metrics
	I1006 19:56:01.047501       1 controller.go:711] "Syncing nftables rules"
	I1006 19:56:09.704654       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1006 19:56:09.704741       1 main.go:301] handling current node
	I1006 19:56:19.707773       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1006 19:56:19.707808       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1ca5cdfb3593e4dad733094500a9fe2b21c50f4f50a60370c3764e89baad98ac] <==
	I1006 19:55:28.544912       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 19:55:28.556482       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1006 19:55:28.559360       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:55:28.627566       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1006 19:55:28.627609       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1006 19:55:28.627662       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1006 19:55:28.637304       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:55:28.647993       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1006 19:55:28.648741       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1006 19:55:28.650326       1 aggregator.go:171] initial CRD sync complete...
	I1006 19:55:28.650341       1 autoregister_controller.go:144] Starting autoregister controller
	I1006 19:55:28.650348       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 19:55:28.650354       1 cache.go:39] Caches are synced for autoregister controller
	I1006 19:55:28.660863       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1006 19:55:28.834387       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 19:55:28.857603       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:55:29.937253       1 controller.go:667] quota admission added evaluator for: namespaces
	I1006 19:55:30.326863       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 19:55:30.478449       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:55:30.536575       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:55:30.774019       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.102.84"}
	I1006 19:55:30.907406       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.211.64"}
	I1006 19:55:33.124886       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 19:55:33.377264       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 19:55:33.474237       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [feaea5c59158221450ddfd0bf1e25c002e959fd1e69f33f065cfebc4fa3ff1dc] <==
	I1006 19:55:32.929202       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1006 19:55:32.931487       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1006 19:55:32.941159       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1006 19:55:32.941276       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 19:55:32.947410       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1006 19:55:32.950666       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1006 19:55:32.953551       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1006 19:55:32.957509       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1006 19:55:32.959689       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1006 19:55:32.966448       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1006 19:55:32.967883       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1006 19:55:32.968055       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1006 19:55:32.968105       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1006 19:55:32.968522       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1006 19:55:32.968596       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1006 19:55:32.968785       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1006 19:55:32.968994       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-830393"
	I1006 19:55:32.969231       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1006 19:55:32.968838       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1006 19:55:32.971428       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:55:32.985336       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1006 19:55:32.987669       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1006 19:55:32.999268       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:55:32.999370       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:55:32.999402       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [e94dede38eab415b5d932275312bb9c5abecf5f78916a03894c7a79d42672cd6] <==
	I1006 19:55:29.806125       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:55:30.070426       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:55:30.181412       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:55:30.191095       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1006 19:55:30.191314       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:55:30.514631       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:55:30.514702       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:55:30.589618       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:55:30.590087       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:55:30.590336       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:55:30.591960       1 config.go:200] "Starting service config controller"
	I1006 19:55:30.597494       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:55:30.597614       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:55:30.597621       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:55:30.597671       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:55:30.597687       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:55:30.622617       1 config.go:309] "Starting node config controller"
	I1006 19:55:30.623540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:55:30.623632       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:55:30.697772       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 19:55:30.697805       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 19:55:30.698548       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5aac2d3517c5e661bd5a3fa05304330ba6bdf4006024e9d2204b7537e85be506] <==
	I1006 19:55:25.959557       1 serving.go:386] Generated self-signed cert in-memory
	I1006 19:55:30.044273       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 19:55:30.044402       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:55:30.117338       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 19:55:30.126581       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 19:55:30.126639       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1006 19:55:30.126850       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1006 19:55:30.126660       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:55:30.129127       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:55:30.126672       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:55:30.129192       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:55:30.226887       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1006 19:55:30.229444       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:55:30.229531       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 19:55:33 embed-certs-830393 kubelet[777]: I1006 19:55:33.629298     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lggcq\" (UniqueName: \"kubernetes.io/projected/8d3356fd-da8c-4b19-9b5c-acf2329fb3d9-kube-api-access-lggcq\") pod \"kubernetes-dashboard-855c9754f9-dg6tb\" (UID: \"8d3356fd-da8c-4b19-9b5c-acf2329fb3d9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dg6tb"
	Oct 06 19:55:33 embed-certs-830393 kubelet[777]: I1006 19:55:33.629397     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d6451a59-b052-443e-ae51-67be77436167-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-rhnrq\" (UID: \"d6451a59-b052-443e-ae51-67be77436167\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhnrq"
	Oct 06 19:55:33 embed-certs-830393 kubelet[777]: I1006 19:55:33.629474     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8d3356fd-da8c-4b19-9b5c-acf2329fb3d9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-dg6tb\" (UID: \"8d3356fd-da8c-4b19-9b5c-acf2329fb3d9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dg6tb"
	Oct 06 19:55:33 embed-certs-830393 kubelet[777]: I1006 19:55:33.683361     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 06 19:55:33 embed-certs-830393 kubelet[777]: W1006 19:55:33.920022     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/crio-5eb0b48f2176a14c2af0828957d1dfdd28a51b00d8f1ce18f2fa4b33b84015b8 WatchSource:0}: Error finding container 5eb0b48f2176a14c2af0828957d1dfdd28a51b00d8f1ce18f2fa4b33b84015b8: Status 404 returned error can't find the container with id 5eb0b48f2176a14c2af0828957d1dfdd28a51b00d8f1ce18f2fa4b33b84015b8
	Oct 06 19:55:33 embed-certs-830393 kubelet[777]: W1006 19:55:33.940588     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/db0504489522a7f76847d7ee94754fe876f97596d0aefbe180d9ae204c670a0a/crio-0b55399d2fc7987a92e9f4012e8405bc2c4a6070c29eaaf8c5227aab29beac1b WatchSource:0}: Error finding container 0b55399d2fc7987a92e9f4012e8405bc2c4a6070c29eaaf8c5227aab29beac1b: Status 404 returned error can't find the container with id 0b55399d2fc7987a92e9f4012e8405bc2c4a6070c29eaaf8c5227aab29beac1b
	Oct 06 19:55:40 embed-certs-830393 kubelet[777]: I1006 19:55:40.011150     777 scope.go:117] "RemoveContainer" containerID="479fc7e6248e67e1b0cadaa08f93446279e02a7ffe2ac958e06c00bfa5b00883"
	Oct 06 19:55:41 embed-certs-830393 kubelet[777]: I1006 19:55:41.007735     777 scope.go:117] "RemoveContainer" containerID="479fc7e6248e67e1b0cadaa08f93446279e02a7ffe2ac958e06c00bfa5b00883"
	Oct 06 19:55:41 embed-certs-830393 kubelet[777]: I1006 19:55:41.008055     777 scope.go:117] "RemoveContainer" containerID="f6489245180f1ae178f1034abef0b8e9d237b61c350f58f1beb9e66e1a7c9fe3"
	Oct 06 19:55:41 embed-certs-830393 kubelet[777]: E1006 19:55:41.008207     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhnrq_kubernetes-dashboard(d6451a59-b052-443e-ae51-67be77436167)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhnrq" podUID="d6451a59-b052-443e-ae51-67be77436167"
	Oct 06 19:55:42 embed-certs-830393 kubelet[777]: I1006 19:55:42.013154     777 scope.go:117] "RemoveContainer" containerID="f6489245180f1ae178f1034abef0b8e9d237b61c350f58f1beb9e66e1a7c9fe3"
	Oct 06 19:55:42 embed-certs-830393 kubelet[777]: E1006 19:55:42.013341     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhnrq_kubernetes-dashboard(d6451a59-b052-443e-ae51-67be77436167)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhnrq" podUID="d6451a59-b052-443e-ae51-67be77436167"
	Oct 06 19:55:43 embed-certs-830393 kubelet[777]: I1006 19:55:43.891299     777 scope.go:117] "RemoveContainer" containerID="f6489245180f1ae178f1034abef0b8e9d237b61c350f58f1beb9e66e1a7c9fe3"
	Oct 06 19:55:43 embed-certs-830393 kubelet[777]: E1006 19:55:43.896210     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhnrq_kubernetes-dashboard(d6451a59-b052-443e-ae51-67be77436167)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhnrq" podUID="d6451a59-b052-443e-ae51-67be77436167"
	Oct 06 19:55:54 embed-certs-830393 kubelet[777]: I1006 19:55:54.750020     777 scope.go:117] "RemoveContainer" containerID="f6489245180f1ae178f1034abef0b8e9d237b61c350f58f1beb9e66e1a7c9fe3"
	Oct 06 19:55:55 embed-certs-830393 kubelet[777]: I1006 19:55:55.055980     777 scope.go:117] "RemoveContainer" containerID="f6489245180f1ae178f1034abef0b8e9d237b61c350f58f1beb9e66e1a7c9fe3"
	Oct 06 19:55:55 embed-certs-830393 kubelet[777]: I1006 19:55:55.056417     777 scope.go:117] "RemoveContainer" containerID="402ce39bb02015c167602ffe294d21c2474937b76810fa99052cb19af2dbce0b"
	Oct 06 19:55:55 embed-certs-830393 kubelet[777]: E1006 19:55:55.056597     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhnrq_kubernetes-dashboard(d6451a59-b052-443e-ae51-67be77436167)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhnrq" podUID="d6451a59-b052-443e-ae51-67be77436167"
	Oct 06 19:55:55 embed-certs-830393 kubelet[777]: I1006 19:55:55.086122     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dg6tb" podStartSLOduration=9.889446733 podStartE2EDuration="22.086103232s" podCreationTimestamp="2025-10-06 19:55:33 +0000 UTC" firstStartedPulling="2025-10-06 19:55:33.943273182 +0000 UTC m=+13.467695932" lastFinishedPulling="2025-10-06 19:55:46.139929681 +0000 UTC m=+25.664352431" observedRunningTime="2025-10-06 19:55:47.045972572 +0000 UTC m=+26.570395339" watchObservedRunningTime="2025-10-06 19:55:55.086103232 +0000 UTC m=+34.610525990"
	Oct 06 19:56:00 embed-certs-830393 kubelet[777]: I1006 19:56:00.129668     777 scope.go:117] "RemoveContainer" containerID="f683a297d5110be5e2db1398d5d9f375e296e6251a59736d916eadc918fcd276"
	Oct 06 19:56:03 embed-certs-830393 kubelet[777]: I1006 19:56:03.891993     777 scope.go:117] "RemoveContainer" containerID="402ce39bb02015c167602ffe294d21c2474937b76810fa99052cb19af2dbce0b"
	Oct 06 19:56:03 embed-certs-830393 kubelet[777]: E1006 19:56:03.892647     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rhnrq_kubernetes-dashboard(d6451a59-b052-443e-ae51-67be77436167)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rhnrq" podUID="d6451a59-b052-443e-ae51-67be77436167"
	Oct 06 19:56:18 embed-certs-830393 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 06 19:56:18 embed-certs-830393 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 06 19:56:18 embed-certs-830393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [02e72812046572c20424f4bde19f4301ef6c72f7c0f044b1a3c716bcfe7a2943] <==
	2025/10/06 19:55:46 Starting overwatch
	2025/10/06 19:55:46 Using namespace: kubernetes-dashboard
	2025/10/06 19:55:46 Using in-cluster config to connect to apiserver
	2025/10/06 19:55:46 Using secret token for csrf signing
	2025/10/06 19:55:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/06 19:55:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/06 19:55:46 Successful initial request to the apiserver, version: v1.34.1
	2025/10/06 19:55:46 Generating JWE encryption key
	2025/10/06 19:55:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/06 19:55:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/06 19:55:46 Initializing JWE encryption key from synchronized object
	2025/10/06 19:55:46 Creating in-cluster Sidecar client
	2025/10/06 19:55:46 Serving insecurely on HTTP port: 9090
	2025/10/06 19:55:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/06 19:56:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [f4d2a9fccfdd6f393221414a6c0aa8a77d35201a86d3a04c6be9f91649344122] <==
	I1006 19:56:00.425812       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 19:56:00.501718       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 19:56:00.501775       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1006 19:56:00.505696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:03.971866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:08.232526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:11.830908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:14.883882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:17.906235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:17.912121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:56:17.912272       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 19:56:17.912343       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a38e7b44-d976-40ca-8b2f-247b56a0f9cb", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-830393_b74e118c-ce89-4499-84fa-817dad0addca became leader
	I1006 19:56:17.912434       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-830393_b74e118c-ce89-4499-84fa-817dad0addca!
	W1006 19:56:17.915946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:17.928149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:56:18.016135       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-830393_b74e118c-ce89-4499-84fa-817dad0addca!
	W1006 19:56:19.931055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:19.935832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:21.941287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:21.947693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f683a297d5110be5e2db1398d5d9f375e296e6251a59736d916eadc918fcd276] <==
	I1006 19:55:29.885542       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1006 19:55:59.900974       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-830393 -n embed-certs-830393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-830393 -n embed-certs-830393: exit status 2 (373.895466ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-830393 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.83s)
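
The post-mortem above closes with helpers_test.go re-running out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-830393 -n embed-certs-830393 and noting that its exit status 2 "may be ok". For readers reproducing this outside the suite, a minimal Go sketch of that probe is shown below; it is not part of the test code, the helper name is invented, and treating exit status 2 as tolerable is an assumption copied from the harness's own note. Only the command line itself is taken from the log above.

package main

import (
	"fmt"
	"os/exec"
)

// apiServerStatus re-runs the status probe the harness issues after a Pause
// failure. The binary path and profile come from the log above; accepting
// exit status 2 mirrors the "(may be ok)" note and is an assumption.
func apiServerStatus(profile string) (string, bool, error) {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.APIServer}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	if err == nil {
		return string(out), true, nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 2 {
		return string(out), true, nil // status error, but possibly fine
	}
	return string(out), false, err
}

func main() {
	out, ok, err := apiServerStatus("embed-certs-830393")
	fmt.Printf("output=%q tolerated=%v err=%v\n", out, ok, err)
}
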

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-997276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-997276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (333.231813ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:56:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-997276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-997276 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-997276 describe deploy/metrics-server -n kube-system: exit status 1 (100.452048ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-997276 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
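
The exit status 11 above traces back to minikube's paused-state check: sudo runc list -f json inside the node fails with "open /run/runc: no such file or directory". A quick way to see exactly what that check sees is to re-run the same runc invocation over minikube ssh. The Go sketch below is illustrative only: the profile name is the one from this test, and passing the trailing command through minikube ssh after "--" is assumed to behave like an ordinary minikube ssh -- <cmd> invocation.

package main

import (
	"fmt"
	"os/exec"
)

// rerunPausedCheck executes the command reported in the MK_ADDON_ENABLE_PAUSED
// error ("sudo runc list -f json"), but via `minikube ssh` so it runs inside
// the node container rather than on the CI host.
func rerunPausedCheck(profile string) {
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", profile,
		"--", "sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\n", out)
	if err != nil {
		// With CRI-O the runc state directory may not be /run/runc, so the same
		// "no such file or directory" failure seen in the test is expected here.
		fmt.Println("runc list failed:", err)
	}
}

func main() {
	rerunPausedCheck("default-k8s-diff-port-997276")
}
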
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-997276
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-997276:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b",
	        "Created": "2025-10-06T19:55:30.333531639Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 206091,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:55:30.401065672Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/hosts",
	        "LogPath": "/var/lib/docker/containers/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b-json.log",
	        "Name": "/default-k8s-diff-port-997276",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-997276:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-997276",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b",
	                "LowerDir": "/var/lib/docker/overlay2/ca41d0f8b7cb8f1b3af001a8f5128ec86c614fe6ad93ed4708e799d1261586b4-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca41d0f8b7cb8f1b3af001a8f5128ec86c614fe6ad93ed4708e799d1261586b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca41d0f8b7cb8f1b3af001a8f5128ec86c614fe6ad93ed4708e799d1261586b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca41d0f8b7cb8f1b3af001a8f5128ec86c614fe6ad93ed4708e799d1261586b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-997276",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-997276/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-997276",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-997276",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-997276",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "419e718623c08e0e2c88b213ae57460a98a5bec1a8996662de4dcb90ea1ea9a9",
	            "SandboxKey": "/var/run/docker/netns/419e718623c0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-997276": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:37:f9:31:23:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2e10a72004c0565fee9f56eb617f1837118ee48bf9bd5cadbc46998fb4ed527c",
	                    "EndpointID": "993fb8a88e2e15f9f29eab650c1fcd0ec1e03594d192775348ff47bdf55f4d49",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-997276",
	                        "4fc3831db948"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
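The PortBindings in the HostConfig above request dynamic host ports (empty HostPort), and the assigned values show up under NetworkSettings.Ports (for example 22/tcp → 33080). A minimal Go sketch of reading one of those assignments with the same inspect template this log uses further down; the helper name hostPort is illustrative:

    // Sketch: read the host port Docker assigned to a published container port,
    // using the same inspect template that appears later in this log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func hostPort(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, err := hostPort("default-k8s-diff-port-997276", "22/tcp")
        if err != nil {
            fmt.Println("inspect:", err)
            return
        }
        fmt.Println("ssh published on 127.0.0.1:" + p) // 33080 in the inspect output above
    }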
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-997276 -n default-k8s-diff-port-997276
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-997276 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-997276 logs -n 25: (1.538125801s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p old-k8s-version-100545 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │                     │
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:53 UTC │
	│ delete  │ -p cert-expiration-585086                                                                                                                                                                                                                     │ cert-expiration-585086       │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:53 UTC │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-314275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │                     │
	│ stop    │ -p no-preload-314275 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:54 UTC │
	│ addons  │ enable dashboard -p no-preload-314275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:54 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-830393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │                     │
	│ stop    │ -p embed-certs-830393 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-830393 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ pause   │ -p no-preload-314275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │                     │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:56 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p disable-driver-mounts-932453                                                                                                                                                                                                               │ disable-driver-mounts-932453 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ start   │ -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:56 UTC │
	│ image   │ embed-certs-830393 image list --format=json                                                                                                                                                                                                   │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ pause   │ -p embed-certs-830393 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	│ delete  │ -p embed-certs-830393                                                                                                                                                                                                                         │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ delete  │ -p embed-certs-830393                                                                                                                                                                                                                         │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ start   │ -p newest-cni-988436 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-997276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:56:27
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:56:27.357993  210052 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:56:27.358117  210052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:56:27.358176  210052 out.go:374] Setting ErrFile to fd 2...
	I1006 19:56:27.358181  210052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:56:27.358452  210052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:56:27.358877  210052 out.go:368] Setting JSON to false
	I1006 19:56:27.359821  210052 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5923,"bootTime":1759774665,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:56:27.359887  210052 start.go:140] virtualization:  
	I1006 19:56:27.363905  210052 out.go:179] * [newest-cni-988436] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:56:27.367891  210052 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:56:27.368035  210052 notify.go:220] Checking for updates...
	I1006 19:56:27.373770  210052 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:56:27.376837  210052 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:56:27.379825  210052 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:56:27.383158  210052 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:56:27.386221  210052 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:56:27.390345  210052 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:56:27.390460  210052 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:56:27.417252  210052 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:56:27.417372  210052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:56:27.489240  210052 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:56:27.479471565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:56:27.489350  210052 docker.go:318] overlay module found
	I1006 19:56:27.492599  210052 out.go:179] * Using the docker driver based on user configuration
	I1006 19:56:27.495496  210052 start.go:304] selected driver: docker
	I1006 19:56:27.495517  210052 start.go:924] validating driver "docker" against <nil>
	I1006 19:56:27.495532  210052 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:56:27.496473  210052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:56:27.555294  210052 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:56:27.545401549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:56:27.555465  210052 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1006 19:56:27.555502  210052 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1006 19:56:27.555808  210052 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1006 19:56:27.558387  210052 out.go:179] * Using Docker driver with root privileges
	I1006 19:56:27.561205  210052 cni.go:84] Creating CNI manager for ""
	I1006 19:56:27.561271  210052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:56:27.561286  210052 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 19:56:27.561359  210052 start.go:348] cluster config:
	{Name:newest-cni-988436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:56:27.566436  210052 out.go:179] * Starting "newest-cni-988436" primary control-plane node in "newest-cni-988436" cluster
	I1006 19:56:27.569272  210052 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:56:27.572353  210052 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:56:27.575074  210052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:56:27.575132  210052 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:56:27.575156  210052 cache.go:58] Caching tarball of preloaded images
	I1006 19:56:27.575178  210052 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:56:27.575243  210052 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:56:27.575259  210052 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:56:27.575366  210052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/config.json ...
	I1006 19:56:27.575382  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/config.json: {Name:mkf9eff8c85abad9a584a9ab3fd004384c67d223 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:27.598586  210052 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:56:27.598610  210052 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:56:27.598630  210052 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:56:27.598653  210052 start.go:360] acquireMachinesLock for newest-cni-988436: {Name:mk73775a9b90360fc78b4ca045cf6f7e4dbc2ba0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:56:27.598754  210052 start.go:364] duration metric: took 79.723µs to acquireMachinesLock for "newest-cni-988436"
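	The acquireMachinesLock lines above show a named lock taken with a 500ms retry delay and a 10m timeout. A minimal in-process sketch of that acquire-with-backoff pattern, for illustration only (the real lock is file-based and spans processes):

    // Sketch: named lock with retry delay and timeout, mirroring the
    // acquireMachinesLock parameters logged above (Delay:500ms Timeout:10m0s).
    // In-memory only; the real lock is file-based and spans processes.
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    var (
        mu    sync.Mutex
        locks = map[string]*sync.Mutex{}
    )

    func acquire(name string, delay, timeout time.Duration) (*sync.Mutex, error) {
        mu.Lock()
        l, ok := locks[name]
        if !ok {
            l = &sync.Mutex{}
            locks[name] = l
        }
        mu.Unlock()

        deadline := time.Now().Add(timeout)
        for {
            if l.TryLock() {
                return l, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring machines lock for %q", name)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        start := time.Now()
        l, err := acquire("newest-cni-988436", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer l.Unlock()
        fmt.Printf("took %s to acquire machines lock\n", time.Since(start))
    }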
	I1006 19:56:27.598784  210052 start.go:93] Provisioning new machine with config: &{Name:newest-cni-988436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:56:27.598852  210052 start.go:125] createHost starting for "" (driver="docker")
	W1006 19:56:24.581465  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:26.589768  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
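	The "will retry" warnings above come from polling the node's Ready condition until it flips to True or the start timeout (6m in the cluster config) expires. A minimal sketch of such a poll that shells out to kubectl; the jsonpath expression and the 2s interval are illustrative assumptions:

    // Sketch: poll a node's Ready condition via kubectl until it is True or the
    // start timeout expires. The jsonpath and the 2s interval are assumptions.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func nodeReady(kubectx, node string) bool {
        out, err := exec.Command("kubectl", "--context", kubectx, "get", "node", node,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
        const profile = "default-k8s-diff-port-997276"
        deadline := time.Now().Add(6 * time.Minute) // StartHostTimeout from the config above
        for time.Now().Before(deadline) {
            if nodeReady(profile, profile) {
                fmt.Println("node is Ready")
                return
            }
            fmt.Printf("node %q has \"Ready\":\"False\" status (will retry)\n", profile)
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for node to become Ready")
    }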
	I1006 19:56:27.602304  210052 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 19:56:27.602540  210052 start.go:159] libmachine.API.Create for "newest-cni-988436" (driver="docker")
	I1006 19:56:27.602587  210052 client.go:168] LocalClient.Create starting
	I1006 19:56:27.602671  210052 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem
	I1006 19:56:27.602710  210052 main.go:141] libmachine: Decoding PEM data...
	I1006 19:56:27.602733  210052 main.go:141] libmachine: Parsing certificate...
	I1006 19:56:27.602785  210052 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem
	I1006 19:56:27.602813  210052 main.go:141] libmachine: Decoding PEM data...
	I1006 19:56:27.602834  210052 main.go:141] libmachine: Parsing certificate...
	I1006 19:56:27.603211  210052 cli_runner.go:164] Run: docker network inspect newest-cni-988436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 19:56:27.619227  210052 cli_runner.go:211] docker network inspect newest-cni-988436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 19:56:27.619302  210052 network_create.go:284] running [docker network inspect newest-cni-988436] to gather additional debugging logs...
	I1006 19:56:27.619318  210052 cli_runner.go:164] Run: docker network inspect newest-cni-988436
	W1006 19:56:27.634402  210052 cli_runner.go:211] docker network inspect newest-cni-988436 returned with exit code 1
	I1006 19:56:27.634430  210052 network_create.go:287] error running [docker network inspect newest-cni-988436]: docker network inspect newest-cni-988436: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-988436 not found
	I1006 19:56:27.634443  210052 network_create.go:289] output of [docker network inspect newest-cni-988436]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-988436 not found
	
	** /stderr **
	I1006 19:56:27.634577  210052 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:56:27.652316  210052 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7058eae896da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:57:c0:bd:de:1b} reservation:<nil>}
	I1006 19:56:27.652636  210052 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e321ce8cf6dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:e6:76:87:fe:bb} reservation:<nil>}
	I1006 19:56:27.652990  210052 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-17bdbd7bd7be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:93:f3:19:55:10} reservation:<nil>}
	I1006 19:56:27.653260  210052 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2e10a72004c0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:2c:92:d4:96:5e} reservation:<nil>}
	I1006 19:56:27.653739  210052 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a4f4b0}
	I1006 19:56:27.653765  210052 network_create.go:124] attempt to create docker network newest-cni-988436 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1006 19:56:27.653837  210052 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-988436 newest-cni-988436
	I1006 19:56:27.727009  210052 network_create.go:108] docker network newest-cni-988436 192.168.85.0/24 created
	I1006 19:56:27.727040  210052 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-988436" container
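	The network_create lines above walk candidate 192.168.x.0/24 blocks in steps of 9 (49, 58, 67, 76, ...) and settle on the first one that does not collide with an existing network, here 192.168.85.0/24. A minimal sketch of that free-subnet search; scanning host interfaces stands in for minikube's actual reservation bookkeeping:

    // Sketch: find a free 192.168.x.0/24 subnet by stepping the third octet in
    // increments of 9 (49, 58, 67, 76, ...) and skipping anything that overlaps
    // an existing local network. Interface scanning is an illustration only.
    package main

    import (
        "fmt"
        "net"
    )

    func takenNets() []*net.IPNet {
        var nets []*net.IPNet
        ifaces, _ := net.Interfaces()
        for _, iface := range ifaces {
            addrs, _ := iface.Addrs()
            for _, a := range addrs {
                if _, n, err := net.ParseCIDR(a.String()); err == nil {
                    nets = append(nets, n)
                }
            }
        }
        return nets
    }

    func freeSubnet() string {
        taken := takenNets()
        for third := 49; third <= 255; third += 9 {
            candidate := fmt.Sprintf("192.168.%d.0/24", third)
            _, cNet, _ := net.ParseCIDR(candidate)
            collides := false
            for _, t := range taken {
                if t.Contains(cNet.IP) || cNet.Contains(t.IP) {
                    collides = true
                    break
                }
            }
            if !collides {
                return candidate
            }
        }
        return ""
    }

    func main() {
        fmt.Println("using free private subnet:", freeSubnet())
    }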
	I1006 19:56:27.727115  210052 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 19:56:27.744832  210052 cli_runner.go:164] Run: docker volume create newest-cni-988436 --label name.minikube.sigs.k8s.io=newest-cni-988436 --label created_by.minikube.sigs.k8s.io=true
	I1006 19:56:27.764605  210052 oci.go:103] Successfully created a docker volume newest-cni-988436
	I1006 19:56:27.764702  210052 cli_runner.go:164] Run: docker run --rm --name newest-cni-988436-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-988436 --entrypoint /usr/bin/test -v newest-cni-988436:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 19:56:28.312473  210052 oci.go:107] Successfully prepared a docker volume newest-cni-988436
	I1006 19:56:28.312516  210052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:56:28.312535  210052 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 19:56:28.312602  210052 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-988436:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	W1006 19:56:29.081881  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:31.082305  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:33.580892  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	I1006 19:56:32.785958  210052 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-988436:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.473321259s)
	I1006 19:56:32.785991  210052 kic.go:203] duration metric: took 4.473452732s to extract preloaded images to volume ...
	W1006 19:56:32.786143  210052 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 19:56:32.786248  210052 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 19:56:32.838133  210052 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-988436 --name newest-cni-988436 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-988436 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-988436 --network newest-cni-988436 --ip 192.168.85.2 --volume newest-cni-988436:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 19:56:33.126638  210052 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Running}}
	I1006 19:56:33.162178  210052 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:56:33.180774  210052 cli_runner.go:164] Run: docker exec newest-cni-988436 stat /var/lib/dpkg/alternatives/iptables
	I1006 19:56:33.225510  210052 oci.go:144] the created container "newest-cni-988436" has a running status.
	I1006 19:56:33.225548  210052 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa...
	I1006 19:56:33.431069  210052 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 19:56:33.466652  210052 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:56:33.499299  210052 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 19:56:33.499320  210052 kic_runner.go:114] Args: [docker exec --privileged newest-cni-988436 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 19:56:33.598929  210052 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:56:33.618475  210052 machine.go:93] provisionDockerMachine start ...
	I1006 19:56:33.618580  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:33.637976  210052 main.go:141] libmachine: Using SSH client type: native
	I1006 19:56:33.638310  210052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1006 19:56:33.638320  210052 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:56:33.638991  210052 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1006 19:56:36.775541  210052 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-988436
	
	I1006 19:56:36.775565  210052 ubuntu.go:182] provisioning hostname "newest-cni-988436"
	I1006 19:56:36.775630  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:36.793876  210052 main.go:141] libmachine: Using SSH client type: native
	I1006 19:56:36.794186  210052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1006 19:56:36.794204  210052 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-988436 && echo "newest-cni-988436" | sudo tee /etc/hostname
	I1006 19:56:36.938442  210052 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-988436
	
	I1006 19:56:36.938536  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:36.957972  210052 main.go:141] libmachine: Using SSH client type: native
	I1006 19:56:36.958283  210052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1006 19:56:36.958303  210052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-988436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-988436/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-988436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:56:37.100192  210052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:56:37.100237  210052 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:56:37.100265  210052 ubuntu.go:190] setting up certificates
	I1006 19:56:37.100274  210052 provision.go:84] configureAuth start
	I1006 19:56:37.100346  210052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-988436
	I1006 19:56:37.119295  210052 provision.go:143] copyHostCerts
	I1006 19:56:37.119371  210052 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:56:37.119386  210052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:56:37.119468  210052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:56:37.119564  210052 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:56:37.119574  210052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:56:37.119601  210052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:56:37.119659  210052 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:56:37.119668  210052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:56:37.119693  210052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:56:37.119772  210052 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.newest-cni-988436 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-988436]
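	The provision step above generates a server certificate whose SANs cover 127.0.0.1, 192.168.85.2, localhost, minikube and newest-cni-988436, signed against the profile's CA. A minimal sketch of building such a certificate with crypto/x509; it self-signs for brevity, and the 26280h lifetime mirrors the CertExpiration value in the cluster config:

    // Sketch: issue a server certificate with the SAN list from the provision
    // step above. Self-signed for brevity; the real provisioner signs with the
    // profile's CA key (ca.pem / ca-key.pem).
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-988436"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-988436"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }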
	W1006 19:56:36.080827  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:38.082444  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	I1006 19:56:37.426302  210052 provision.go:177] copyRemoteCerts
	I1006 19:56:37.426366  210052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:56:37.426403  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:37.457518  210052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:56:37.559808  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:56:37.581588  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 19:56:37.599859  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 19:56:37.617035  210052 provision.go:87] duration metric: took 516.722506ms to configureAuth
	I1006 19:56:37.617133  210052 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:56:37.617342  210052 config.go:182] Loaded profile config "newest-cni-988436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:56:37.617474  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:37.637667  210052 main.go:141] libmachine: Using SSH client type: native
	I1006 19:56:37.637970  210052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1006 19:56:37.637985  210052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:56:37.910498  210052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:56:37.910525  210052 machine.go:96] duration metric: took 4.292030507s to provisionDockerMachine
	I1006 19:56:37.910535  210052 client.go:171] duration metric: took 10.307937103s to LocalClient.Create
	I1006 19:56:37.910545  210052 start.go:167] duration metric: took 10.308006348s to libmachine.API.Create "newest-cni-988436"
	I1006 19:56:37.910552  210052 start.go:293] postStartSetup for "newest-cni-988436" (driver="docker")
	I1006 19:56:37.910562  210052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:56:37.910633  210052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:56:37.910678  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:37.928551  210052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:56:38.033048  210052 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:56:38.037223  210052 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:56:38.037253  210052 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:56:38.037267  210052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:56:38.037332  210052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:56:38.037421  210052 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:56:38.037525  210052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:56:38.046942  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:56:38.068019  210052 start.go:296] duration metric: took 157.450178ms for postStartSetup
	I1006 19:56:38.068422  210052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-988436
	I1006 19:56:38.092234  210052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/config.json ...
	I1006 19:56:38.092529  210052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:56:38.092583  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:38.109788  210052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:56:38.205124  210052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:56:38.209969  210052 start.go:128] duration metric: took 10.611099752s to createHost
	I1006 19:56:38.209995  210052 start.go:83] releasing machines lock for "newest-cni-988436", held for 10.611228173s
	I1006 19:56:38.210094  210052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-988436
	I1006 19:56:38.227465  210052 ssh_runner.go:195] Run: cat /version.json
	I1006 19:56:38.227522  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:38.227785  210052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:56:38.227850  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:38.247799  210052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:56:38.256254  210052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:56:38.359242  210052 ssh_runner.go:195] Run: systemctl --version
	I1006 19:56:38.472099  210052 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:56:38.511599  210052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:56:38.517885  210052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:56:38.517960  210052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:56:38.555546  210052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 19:56:38.555567  210052 start.go:495] detecting cgroup driver to use...
	I1006 19:56:38.555604  210052 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:56:38.555656  210052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:56:38.575462  210052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:56:38.593386  210052 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:56:38.593486  210052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:56:38.610517  210052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:56:38.629288  210052 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:56:38.760103  210052 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:56:38.890811  210052 docker.go:234] disabling docker service ...
	I1006 19:56:38.890902  210052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:56:38.918134  210052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:56:38.931520  210052 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:56:39.053585  210052 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:56:39.181254  210052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:56:39.194464  210052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:56:39.209553  210052 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:56:39.209680  210052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:56:39.218804  210052 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:56:39.218876  210052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:56:39.228509  210052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:56:39.237269  210052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:56:39.245888  210052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:56:39.254513  210052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:56:39.263406  210052 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:56:39.279311  210052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
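The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place; a quick way to confirm the resulting values before the daemon-reload and `systemctl restart crio` that follow would be (a sketch, assuming the file layout shown in the log):

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside the default_sysctls list)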
	I1006 19:56:39.288367  210052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:56:39.296256  210052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:56:39.303886  210052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:56:39.425912  210052 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 19:56:39.569993  210052 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:56:39.570070  210052 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:56:39.574526  210052 start.go:563] Will wait 60s for crictl version
	I1006 19:56:39.574592  210052 ssh_runner.go:195] Run: which crictl
	I1006 19:56:39.580184  210052 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:56:39.607742  210052 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:56:39.607886  210052 ssh_runner.go:195] Run: crio --version
	I1006 19:56:39.641389  210052 ssh_runner.go:195] Run: crio --version
	I1006 19:56:39.675076  210052 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:56:39.677929  210052 cli_runner.go:164] Run: docker network inspect newest-cni-988436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:56:39.696516  210052 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1006 19:56:39.700340  210052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:56:39.713395  210052 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1006 19:56:39.716246  210052 kubeadm.go:883] updating cluster {Name:newest-cni-988436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:56:39.716412  210052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:56:39.716509  210052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:56:39.755113  210052 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:56:39.755135  210052 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:56:39.755196  210052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:56:39.787126  210052 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:56:39.787196  210052 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:56:39.787216  210052 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1006 19:56:39.787335  210052 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-988436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:56:39.787460  210052 ssh_runner.go:195] Run: crio config
	I1006 19:56:39.863595  210052 cni.go:84] Creating CNI manager for ""
	I1006 19:56:39.863618  210052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:56:39.863637  210052 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1006 19:56:39.863661  210052 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-988436 NodeName:newest-cni-988436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:56:39.863818  210052 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-988436"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:56:39.863893  210052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:56:39.872433  210052 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:56:39.872511  210052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:56:39.880354  210052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 19:56:39.894827  210052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:56:39.908285  210052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
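The rendered kubeadm config is copied to /var/tmp/minikube/kubeadm.yaml.new here and promoted to kubeadm.yaml just before `kubeadm init` later in the log. It can be sanity-checked independently of the init run; a sketch, assuming a kubeadm version recent enough to ship the `kubeadm config validate` subcommand:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new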
	I1006 19:56:39.921983  210052 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:56:39.925561  210052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:56:39.936397  210052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:56:40.085605  210052 ssh_runner.go:195] Run: sudo systemctl start kubelet
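The kubelet.service unit and its 10-kubeadm.conf drop-in written just above can be inspected once the service is started; a sketch using standard systemd tooling only:

    # prints /lib/systemd/system/kubelet.service together with the drop-in
    # from /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl cat kubelet
    systemctl is-active kubelet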
	I1006 19:56:40.106839  210052 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436 for IP: 192.168.85.2
	I1006 19:56:40.106915  210052 certs.go:195] generating shared ca certs ...
	I1006 19:56:40.106947  210052 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:40.107150  210052 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:56:40.107238  210052 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:56:40.107268  210052 certs.go:257] generating profile certs ...
	I1006 19:56:40.107358  210052 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/client.key
	I1006 19:56:40.107407  210052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/client.crt with IP's: []
	I1006 19:56:40.665462  210052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/client.crt ...
	I1006 19:56:40.665496  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/client.crt: {Name:mk82863bb472ecb3697b17bef486db7975bf391d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:40.665718  210052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/client.key ...
	I1006 19:56:40.665731  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/client.key: {Name:mkdc8e2f4c6d136d3005485915301015a1703192 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:40.665828  210052 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key.1ee6693d
	I1006 19:56:40.665846  210052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.crt.1ee6693d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1006 19:56:40.921794  210052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.crt.1ee6693d ...
	I1006 19:56:40.921825  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.crt.1ee6693d: {Name:mkc191190d9fb7bfe77b9015024577fb844fb7a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:40.922002  210052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key.1ee6693d ...
	I1006 19:56:40.922015  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key.1ee6693d: {Name:mke12ff7d149e4967c0b82c9911c4e4a149defd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:40.922097  210052 certs.go:382] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.crt.1ee6693d -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.crt
	I1006 19:56:40.922176  210052 certs.go:386] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key.1ee6693d -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key
	I1006 19:56:40.922234  210052 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.key
	I1006 19:56:40.922253  210052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.crt with IP's: []
	I1006 19:56:41.223580  210052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.crt ...
	I1006 19:56:41.223606  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.crt: {Name:mka69f8bb4fd0f5d11ba2f11d7c7c8672761b518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:41.223790  210052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.key ...
	I1006 19:56:41.223809  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.key: {Name:mkc4b431cd69cb7113ec8abbadf217828f8fe347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:41.223992  210052 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:56:41.224034  210052 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:56:41.224048  210052 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:56:41.224072  210052 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:56:41.224100  210052 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:56:41.224125  210052 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:56:41.224181  210052 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:56:41.224776  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:56:41.243540  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:56:41.263322  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:56:41.282749  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:56:41.301198  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 19:56:41.320838  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 19:56:41.338332  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:56:41.357382  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 19:56:41.375828  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:56:41.399766  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:56:41.419789  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:56:41.439316  210052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:56:41.453030  210052 ssh_runner.go:195] Run: openssl version
	I1006 19:56:41.460587  210052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:56:41.470142  210052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:56:41.473980  210052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:56:41.474040  210052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:56:41.520150  210052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:56:41.529157  210052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:56:41.540617  210052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:56:41.547943  210052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:56:41.548006  210052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:56:41.598583  210052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 19:56:41.607325  210052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:56:41.615729  210052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:56:41.619237  210052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:56:41.619306  210052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:56:41.661642  210052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
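The pattern above (`openssl x509 -hash` followed by a symlink such as b5213941.0) is the standard OpenSSL CA-directory layout: the link name is the certificate's subject hash plus a ".0" suffix, which is how TLS clients locate minikubeCA.pem under /etc/ssl/certs. A sketch of reproducing one of these links by hand, with file names taken from the log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"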
	I1006 19:56:41.670098  210052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:56:41.673806  210052 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 19:56:41.673886  210052 kubeadm.go:400] StartCluster: {Name:newest-cni-988436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:56:41.673987  210052 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:56:41.674051  210052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:56:41.710128  210052 cri.go:89] found id: ""
	I1006 19:56:41.710278  210052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:56:41.718440  210052 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 19:56:41.729365  210052 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:56:41.729492  210052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:56:41.737890  210052 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:56:41.737910  210052 kubeadm.go:157] found existing configuration files:
	
	I1006 19:56:41.737995  210052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 19:56:41.746054  210052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:56:41.746206  210052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:56:41.754054  210052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 19:56:41.762051  210052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:56:41.762169  210052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:56:41.769817  210052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 19:56:41.777853  210052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:56:41.777972  210052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:56:41.785311  210052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 19:56:41.793390  210052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:56:41.793463  210052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 19:56:41.801532  210052 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:56:41.846098  210052 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:56:41.846499  210052 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:56:41.874009  210052 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:56:41.874097  210052 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:56:41.874140  210052 kubeadm.go:318] OS: Linux
	I1006 19:56:41.874201  210052 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:56:41.874263  210052 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:56:41.874323  210052 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:56:41.874385  210052 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:56:41.874440  210052 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:56:41.874499  210052 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:56:41.874558  210052 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:56:41.874623  210052 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:56:41.874696  210052 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:56:41.957610  210052 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:56:41.957741  210052 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:56:41.957847  210052 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:56:41.965754  210052 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 19:56:41.972299  210052 out.go:252]   - Generating certificates and keys ...
	I1006 19:56:41.972404  210052 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:56:41.972476  210052 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:56:42.188994  210052 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1006 19:56:40.089093  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:42.580996  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	I1006 19:56:43.268934  210052 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 19:56:43.761318  210052 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 19:56:44.281673  210052 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 19:56:44.552377  210052 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 19:56:44.552956  210052 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-988436] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1006 19:56:44.927974  210052 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 19:56:44.928125  210052 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-988436] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1006 19:56:45.729706  210052 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 19:56:46.699422  210052 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 19:56:47.267978  210052 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 19:56:47.268279  210052 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1006 19:56:44.582051  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	I1006 19:56:46.081891  205530 node_ready.go:49] node "default-k8s-diff-port-997276" is "Ready"
	I1006 19:56:46.081929  205530 node_ready.go:38] duration metric: took 40.004115031s for node "default-k8s-diff-port-997276" to be "Ready" ...
	I1006 19:56:46.081943  205530 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:56:46.082002  205530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:56:46.106981  205530 api_server.go:72] duration metric: took 41.394476453s to wait for apiserver process to appear ...
	I1006 19:56:46.107010  205530 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:56:46.107046  205530 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1006 19:56:46.116095  205530 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1006 19:56:46.117395  205530 api_server.go:141] control plane version: v1.34.1
	I1006 19:56:46.117424  205530 api_server.go:131] duration metric: took 10.406615ms to wait for apiserver health ...
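The healthz probe above is a plain HTTPS GET against the apiserver; reproducing it by hand against the same endpoint would look like this (a sketch; -k skips certificate verification because the apiserver certificate is signed by the cluster's own CA rather than a system-trusted one):

    curl -sk https://192.168.76.2:8444/healthz
    # expected body on success: ok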
	I1006 19:56:46.117433  205530 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:56:46.130036  205530 system_pods.go:59] 8 kube-system pods found
	I1006 19:56:46.130077  205530 system_pods.go:61] "coredns-66bc5c9577-bns67" [89f11c5e-2682-4227-80ba-2fe8b97c1629] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:56:46.130084  205530 system_pods.go:61] "etcd-default-k8s-diff-port-997276" [721c87e6-60f9-41e8-ac17-f6029a04e17e] Running
	I1006 19:56:46.130090  205530 system_pods.go:61] "kindnet-twtwt" [e281e8b3-9cc4-41fb-8e22-a66ef4e23a38] Running
	I1006 19:56:46.130099  205530 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-997276" [7d10547b-e1b4-4af4-8da7-f4d28894afd4] Running
	I1006 19:56:46.130104  205530 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-997276" [a477444a-eb40-45c8-8588-842a7de0164e] Running
	I1006 19:56:46.130108  205530 system_pods.go:61] "kube-proxy-zl7gg" [05397544-ebaf-4f98-8762-9ede9c706bc9] Running
	I1006 19:56:46.130113  205530 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-997276" [1b4b8d6e-f4d6-40f1-99eb-2d33fc3533a5] Running
	I1006 19:56:46.130123  205530 system_pods.go:61] "storage-provisioner" [3cd050f9-3953-4804-bbda-79ae9e50cf67] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:56:46.130130  205530 system_pods.go:74] duration metric: took 12.691146ms to wait for pod list to return data ...
	I1006 19:56:46.130144  205530 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:56:46.136990  205530 default_sa.go:45] found service account: "default"
	I1006 19:56:46.137018  205530 default_sa.go:55] duration metric: took 6.867335ms for default service account to be created ...
	I1006 19:56:46.137028  205530 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 19:56:46.143156  205530 system_pods.go:86] 8 kube-system pods found
	I1006 19:56:46.143193  205530 system_pods.go:89] "coredns-66bc5c9577-bns67" [89f11c5e-2682-4227-80ba-2fe8b97c1629] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:56:46.143200  205530 system_pods.go:89] "etcd-default-k8s-diff-port-997276" [721c87e6-60f9-41e8-ac17-f6029a04e17e] Running
	I1006 19:56:46.143207  205530 system_pods.go:89] "kindnet-twtwt" [e281e8b3-9cc4-41fb-8e22-a66ef4e23a38] Running
	I1006 19:56:46.143211  205530 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997276" [7d10547b-e1b4-4af4-8da7-f4d28894afd4] Running
	I1006 19:56:46.143216  205530 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997276" [a477444a-eb40-45c8-8588-842a7de0164e] Running
	I1006 19:56:46.143220  205530 system_pods.go:89] "kube-proxy-zl7gg" [05397544-ebaf-4f98-8762-9ede9c706bc9] Running
	I1006 19:56:46.143224  205530 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997276" [1b4b8d6e-f4d6-40f1-99eb-2d33fc3533a5] Running
	I1006 19:56:46.143229  205530 system_pods.go:89] "storage-provisioner" [3cd050f9-3953-4804-bbda-79ae9e50cf67] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:56:46.143251  205530 retry.go:31] will retry after 288.210543ms: missing components: kube-dns
	I1006 19:56:46.435799  205530 system_pods.go:86] 8 kube-system pods found
	I1006 19:56:46.435836  205530 system_pods.go:89] "coredns-66bc5c9577-bns67" [89f11c5e-2682-4227-80ba-2fe8b97c1629] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:56:46.435843  205530 system_pods.go:89] "etcd-default-k8s-diff-port-997276" [721c87e6-60f9-41e8-ac17-f6029a04e17e] Running
	I1006 19:56:46.435850  205530 system_pods.go:89] "kindnet-twtwt" [e281e8b3-9cc4-41fb-8e22-a66ef4e23a38] Running
	I1006 19:56:46.435855  205530 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997276" [7d10547b-e1b4-4af4-8da7-f4d28894afd4] Running
	I1006 19:56:46.435859  205530 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997276" [a477444a-eb40-45c8-8588-842a7de0164e] Running
	I1006 19:56:46.435864  205530 system_pods.go:89] "kube-proxy-zl7gg" [05397544-ebaf-4f98-8762-9ede9c706bc9] Running
	I1006 19:56:46.435870  205530 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997276" [1b4b8d6e-f4d6-40f1-99eb-2d33fc3533a5] Running
	I1006 19:56:46.435887  205530 system_pods.go:89] "storage-provisioner" [3cd050f9-3953-4804-bbda-79ae9e50cf67] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:56:46.435906  205530 retry.go:31] will retry after 262.137696ms: missing components: kube-dns
	I1006 19:56:46.704260  205530 system_pods.go:86] 8 kube-system pods found
	I1006 19:56:46.704293  205530 system_pods.go:89] "coredns-66bc5c9577-bns67" [89f11c5e-2682-4227-80ba-2fe8b97c1629] Running
	I1006 19:56:46.704304  205530 system_pods.go:89] "etcd-default-k8s-diff-port-997276" [721c87e6-60f9-41e8-ac17-f6029a04e17e] Running
	I1006 19:56:46.704321  205530 system_pods.go:89] "kindnet-twtwt" [e281e8b3-9cc4-41fb-8e22-a66ef4e23a38] Running
	I1006 19:56:46.704330  205530 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997276" [7d10547b-e1b4-4af4-8da7-f4d28894afd4] Running
	I1006 19:56:46.704336  205530 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997276" [a477444a-eb40-45c8-8588-842a7de0164e] Running
	I1006 19:56:46.704340  205530 system_pods.go:89] "kube-proxy-zl7gg" [05397544-ebaf-4f98-8762-9ede9c706bc9] Running
	I1006 19:56:46.704344  205530 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997276" [1b4b8d6e-f4d6-40f1-99eb-2d33fc3533a5] Running
	I1006 19:56:46.704352  205530 system_pods.go:89] "storage-provisioner" [3cd050f9-3953-4804-bbda-79ae9e50cf67] Running
	I1006 19:56:46.704360  205530 system_pods.go:126] duration metric: took 567.32567ms to wait for k8s-apps to be running ...
	I1006 19:56:46.704372  205530 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 19:56:46.704427  205530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:56:46.726522  205530 system_svc.go:56] duration metric: took 22.139155ms WaitForService to wait for kubelet
	I1006 19:56:46.726552  205530 kubeadm.go:586] duration metric: took 42.014051673s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:56:46.726570  205530 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:56:46.730502  205530 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:56:46.730536  205530 node_conditions.go:123] node cpu capacity is 2
	I1006 19:56:46.730560  205530 node_conditions.go:105] duration metric: took 3.984762ms to run NodePressure ...
	I1006 19:56:46.730573  205530 start.go:241] waiting for startup goroutines ...
	I1006 19:56:46.730585  205530 start.go:246] waiting for cluster config update ...
	I1006 19:56:46.730597  205530 start.go:255] writing updated cluster config ...
	I1006 19:56:46.730919  205530 ssh_runner.go:195] Run: rm -f paused
	I1006 19:56:46.734940  205530 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:56:46.740056  205530 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bns67" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:46.746379  205530 pod_ready.go:94] pod "coredns-66bc5c9577-bns67" is "Ready"
	I1006 19:56:46.746414  205530 pod_ready.go:86] duration metric: took 6.327633ms for pod "coredns-66bc5c9577-bns67" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:46.749494  205530 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:46.756307  205530 pod_ready.go:94] pod "etcd-default-k8s-diff-port-997276" is "Ready"
	I1006 19:56:46.756334  205530 pod_ready.go:86] duration metric: took 6.800322ms for pod "etcd-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:46.759272  205530 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:46.765452  205530 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-997276" is "Ready"
	I1006 19:56:46.765481  205530 pod_ready.go:86] duration metric: took 6.18555ms for pod "kube-apiserver-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:46.768411  205530 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:47.141709  205530 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-997276" is "Ready"
	I1006 19:56:47.141785  205530 pod_ready.go:86] duration metric: took 373.345577ms for pod "kube-controller-manager-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:47.341356  205530 pod_ready.go:83] waiting for pod "kube-proxy-zl7gg" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:47.740720  205530 pod_ready.go:94] pod "kube-proxy-zl7gg" is "Ready"
	I1006 19:56:47.740750  205530 pod_ready.go:86] duration metric: took 399.314802ms for pod "kube-proxy-zl7gg" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:47.940755  205530 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:48.340582  205530 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-997276" is "Ready"
	I1006 19:56:48.340607  205530 pod_ready.go:86] duration metric: took 399.828626ms for pod "kube-scheduler-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:48.340619  205530 pod_ready.go:40] duration metric: took 1.605638798s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:56:48.424763  205530 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 19:56:48.429795  205530 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-997276" cluster and "default" namespace by default
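With the kubeconfig now pointing at the new cluster, the readiness the log just waited for can be re-checked directly; a sketch, assuming the kubectl context is named after the profile, as minikube normally arranges:

    kubectl --context default-k8s-diff-port-997276 get nodes
    kubectl --context default-k8s-diff-port-997276 get pods -n kube-system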
	I1006 19:56:48.784916  210052 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:56:49.145875  210052 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:56:50.005498  210052 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:56:50.385840  210052 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:56:50.864684  210052 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:56:50.864791  210052 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:56:50.867169  210052 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 19:56:50.871232  210052 out.go:252]   - Booting up control plane ...
	I1006 19:56:50.871355  210052 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:56:50.871439  210052 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:56:50.871512  210052 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:56:50.895846  210052 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:56:50.895967  210052 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:56:50.901137  210052 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:56:50.901471  210052 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:56:50.901521  210052 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:56:51.052595  210052 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:56:51.052978  210052 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:56:52.554493  210052 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501866122s
	I1006 19:56:52.558131  210052 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:56:52.558230  210052 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1006 19:56:52.558324  210052 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:56:52.558630  210052 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 19:56:56.272199  210052 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.713406623s
	
	
	==> CRI-O <==
	Oct 06 19:56:46 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:46.218644907Z" level=info msg="Created container 2be3d0c826cd113963dbfdeef974e352f9de6e1c537c7a42bb4f800756953864: kube-system/coredns-66bc5c9577-bns67/coredns" id=1a4b18b7-1c62-40b1-87a1-c34485d649c4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:56:46 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:46.219862275Z" level=info msg="Starting container: 2be3d0c826cd113963dbfdeef974e352f9de6e1c537c7a42bb4f800756953864" id=b5e14d14-5df7-4c2f-9104-b6bd6a71f8f2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:56:46 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:46.225114966Z" level=info msg="Started container" PID=1751 containerID=2be3d0c826cd113963dbfdeef974e352f9de6e1c537c7a42bb4f800756953864 description=kube-system/coredns-66bc5c9577-bns67/coredns id=b5e14d14-5df7-4c2f-9104-b6bd6a71f8f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a00225e0e16ef1b7a3b3c0feb7fc09b9ea0b37e6279332b30654c716b3e4725f
	Oct 06 19:56:49 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:49.043072849Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0374ce73-5fe7-44e9-b417-489ecd450674 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:56:49 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:49.043152489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:56:49 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:49.060102479Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a78605bc1fe4e819f5d5f5ccc7d5a1264fc57b2dcf150a812b2a104b192385a1 UID:53963bee-94d1-4ca1-8020-154e6f994193 NetNS:/var/run/netns/f50ca7e3-0404-49fb-9c5e-21265c49e944 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000791e0}] Aliases:map[]}"
	Oct 06 19:56:49 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:49.060160753Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 06 19:56:49 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:49.082296675Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a78605bc1fe4e819f5d5f5ccc7d5a1264fc57b2dcf150a812b2a104b192385a1 UID:53963bee-94d1-4ca1-8020-154e6f994193 NetNS:/var/run/netns/f50ca7e3-0404-49fb-9c5e-21265c49e944 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000791e0}] Aliases:map[]}"
	Oct 06 19:56:49 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:49.08259968Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 06 19:56:49 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:49.086142235Z" level=info msg="Ran pod sandbox a78605bc1fe4e819f5d5f5ccc7d5a1264fc57b2dcf150a812b2a104b192385a1 with infra container: default/busybox/POD" id=0374ce73-5fe7-44e9-b417-489ecd450674 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:56:49 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:49.088965893Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=31cdff39-8851-4bbd-988f-f74d15c6b479 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:56:49 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:49.089174332Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=31cdff39-8851-4bbd-988f-f74d15c6b479 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:56:49 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:49.089273017Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=31cdff39-8851-4bbd-988f-f74d15c6b479 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:56:49 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:49.092928091Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c8364625-0436-47f3-98ce-aa66c3677e0f name=/runtime.v1.ImageService/PullImage
	Oct 06 19:56:49 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:49.096126476Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 06 19:56:51 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:51.077326486Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=c8364625-0436-47f3-98ce-aa66c3677e0f name=/runtime.v1.ImageService/PullImage
	Oct 06 19:56:51 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:51.078312671Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3289d6af-e1b4-4b07-91eb-537f2e12d15b name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:56:51 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:51.082193004Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eba3edf9-4258-4b7d-921f-46670a9dccaa name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:56:51 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:51.089423496Z" level=info msg="Creating container: default/busybox/busybox" id=55930747-9971-4e72-9b72-57489a0e4b60 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:56:51 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:51.090671199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:56:51 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:51.095864813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:56:51 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:51.09639096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:56:51 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:51.118575581Z" level=info msg="Created container 42442fe1af8d186d4582c97e6dfc099eb18f8de131bf8ddea252d337d498a867: default/busybox/busybox" id=55930747-9971-4e72-9b72-57489a0e4b60 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:56:51 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:51.119653705Z" level=info msg="Starting container: 42442fe1af8d186d4582c97e6dfc099eb18f8de131bf8ddea252d337d498a867" id=6cb8f0dd-1d9d-4e5d-bd5b-500a705875e9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:56:51 default-k8s-diff-port-997276 crio[841]: time="2025-10-06T19:56:51.122389Z" level=info msg="Started container" PID=1802 containerID=42442fe1af8d186d4582c97e6dfc099eb18f8de131bf8ddea252d337d498a867 description=default/busybox/busybox id=6cb8f0dd-1d9d-4e5d-bd5b-500a705875e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a78605bc1fe4e819f5d5f5ccc7d5a1264fc57b2dcf150a812b2a104b192385a1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	42442fe1af8d1       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   a78605bc1fe4e       busybox                                                default
	2be3d0c826cd1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   a00225e0e16ef       coredns-66bc5c9577-bns67                               kube-system
	002ec5f47198f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   fca7b2295f17d       storage-provisioner                                    kube-system
	f4844f7fc2d4f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   62253b29242ff       kube-proxy-zl7gg                                       kube-system
	a1f77b9746fcd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   e0e4fd5c59d12       kindnet-twtwt                                          kube-system
	b12e88a3c975c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   40b0536005a5b       etcd-default-k8s-diff-port-997276                      kube-system
	bba521e87edec       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   3745261e39055       kube-apiserver-default-k8s-diff-port-997276            kube-system
	381529995c2b7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   1b2149c40585c       kube-controller-manager-default-k8s-diff-port-997276   kube-system
	d7bb5c9806ea2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   247e25fa02b68       kube-scheduler-default-k8s-diff-port-997276            kube-system
	
	
	==> coredns [2be3d0c826cd113963dbfdeef974e352f9de6e1c537c7a42bb4f800756953864] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43043 - 20197 "HINFO IN 3694572592738995338.6010402607048790096. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015750976s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-997276
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-997276
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=default-k8s-diff-port-997276
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_55_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:55:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-997276
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:56:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:56:45 +0000   Mon, 06 Oct 2025 19:55:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:56:45 +0000   Mon, 06 Oct 2025 19:55:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:56:45 +0000   Mon, 06 Oct 2025 19:55:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:56:45 +0000   Mon, 06 Oct 2025 19:56:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-997276
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb8e7845d757457a9e2e4b7bbac33e65
	  System UUID:                4764672c-0e9d-4c30-bf0e-576675527b0d
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-bns67                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-997276                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-twtwt                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-997276             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-997276    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-zl7gg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-997276             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Warning  CgroupV1                 69s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-997276 event: Registered Node default-k8s-diff-port-997276 in Controller
	  Normal   NodeReady                14s                kubelet          Node default-k8s-diff-port-997276 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:52] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:55] overlayfs: idmapped layers are currently not supported
	[ +29.761517] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:56] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b12e88a3c975ccc7e4af9fecc0649289fa8cee32646d856c00c54d20e8f561d2] <==
	{"level":"warn","ts":"2025-10-06T19:55:53.626603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:53.636030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:53.650962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:53.671002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:53.699291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:53.717170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:53.730378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:53.751960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:53.767614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:53.811689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:53.865581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:53.880071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:53.924108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:53.949485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:53.981703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:54.021106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:54.072572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:54.103873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:54.151300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:54.163821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:54.226917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:54.257029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:54.277499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:54.310157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:55:54.383361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52340","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:56:59 up  1:39,  0 user,  load average: 4.01, 2.95, 2.17
	Linux default-k8s-diff-port-997276 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a1f77b9746fcd9ce095a37901dc254b0e1538045314c5e7488304cb68e64c398] <==
	I1006 19:56:04.948425       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:56:04.948674       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1006 19:56:04.948804       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:56:04.948816       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:56:04.948826       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:56:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:56:05.151126       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:56:05.151145       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:56:05.151154       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:56:05.151529       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1006 19:56:35.151218       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1006 19:56:35.151218       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1006 19:56:35.151439       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1006 19:56:35.152445       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1006 19:56:36.351528       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:56:36.351557       1 metrics.go:72] Registering metrics
	I1006 19:56:36.351612       1 controller.go:711] "Syncing nftables rules"
	I1006 19:56:45.155853       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:56:45.155924       1 main.go:301] handling current node
	I1006 19:56:55.151781       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:56:55.151889       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bba521e87edecfd6ae60801c1249f31f263de8c46657550b3a19173d6e5650b8] <==
	I1006 19:55:55.621358       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1006 19:55:55.621964       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1006 19:55:55.658928       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:55:55.659008       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1006 19:55:55.671386       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:55:55.671498       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1006 19:55:55.799951       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:55:56.218932       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1006 19:55:56.226375       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1006 19:55:56.227150       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:55:57.259070       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:55:57.319678       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:55:57.442269       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1006 19:55:57.456784       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1006 19:55:57.457921       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 19:55:57.479349       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 19:55:58.397079       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 19:55:58.417863       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 19:55:58.440679       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1006 19:55:58.482065       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1006 19:56:04.209000       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1006 19:56:04.403604       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 19:56:04.502792       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:56:04.511154       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1006 19:56:56.968073       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:56390: use of closed network connection
	
	
	==> kube-controller-manager [381529995c2b7cd58a0c44664154c450a24ae9fca6650e00d48f72400650d772] <==
	I1006 19:56:03.416512       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1006 19:56:03.416713       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1006 19:56:03.417146       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1006 19:56:03.417158       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1006 19:56:03.421266       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1006 19:56:03.423525       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:56:03.429180       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-997276" podCIDRs=["10.244.0.0/24"]
	I1006 19:56:03.431778       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1006 19:56:03.444057       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1006 19:56:03.444276       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1006 19:56:03.444348       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1006 19:56:03.444400       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1006 19:56:03.444948       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1006 19:56:03.445428       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1006 19:56:03.445484       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1006 19:56:03.445494       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1006 19:56:03.446524       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1006 19:56:03.446608       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1006 19:56:03.447663       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1006 19:56:03.447774       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1006 19:56:03.455035       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1006 19:56:03.457227       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:56:03.457244       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:56:03.457252       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 19:56:48.612642       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f4844f7fc2d4f512125dc9efd40cb83b8cee8ff9fe433bcb7dc3d2282379c947] <==
	I1006 19:56:05.086051       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:56:05.240924       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:56:05.347611       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:56:05.347668       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1006 19:56:05.347895       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:56:05.489512       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:56:05.489693       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:56:05.501444       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:56:05.504624       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:56:05.504665       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:56:05.509807       1 config.go:200] "Starting service config controller"
	I1006 19:56:05.509961       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:56:05.509987       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:56:05.509992       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:56:05.510010       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:56:05.510014       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:56:05.510863       1 config.go:309] "Starting node config controller"
	I1006 19:56:05.510873       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:56:05.510886       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:56:05.611860       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 19:56:05.611927       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 19:56:05.611981       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d7bb5c9806ea21aa521290c98ac33a2de4cd55c2f9c3dab52cafc53e797945d2] <==
	I1006 19:55:56.226670       1 serving.go:386] Generated self-signed cert in-memory
	I1006 19:55:57.444449       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 19:55:57.444578       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:55:57.456735       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 19:55:57.456973       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:55:57.459790       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:55:57.456990       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:55:57.463736       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:55:57.457003       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 19:55:57.456928       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1006 19:55:57.464652       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1006 19:55:57.560426       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:55:57.564606       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:55:57.564771       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 06 19:56:01 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:01.445947    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-997276" podStartSLOduration=3.4459265390000002 podStartE2EDuration="3.445926539s" podCreationTimestamp="2025-10-06 19:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:55:59.561167115 +0000 UTC m=+1.382649894" watchObservedRunningTime="2025-10-06 19:56:01.445926539 +0000 UTC m=+3.267409310"
	Oct 06 19:56:03 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:03.462164    1302 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 06 19:56:03 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:03.463301    1302 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 06 19:56:04 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:04.286335    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05397544-ebaf-4f98-8762-9ede9c706bc9-lib-modules\") pod \"kube-proxy-zl7gg\" (UID: \"05397544-ebaf-4f98-8762-9ede9c706bc9\") " pod="kube-system/kube-proxy-zl7gg"
	Oct 06 19:56:04 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:04.286451    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05397544-ebaf-4f98-8762-9ede9c706bc9-xtables-lock\") pod \"kube-proxy-zl7gg\" (UID: \"05397544-ebaf-4f98-8762-9ede9c706bc9\") " pod="kube-system/kube-proxy-zl7gg"
	Oct 06 19:56:04 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:04.286474    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/05397544-ebaf-4f98-8762-9ede9c706bc9-kube-proxy\") pod \"kube-proxy-zl7gg\" (UID: \"05397544-ebaf-4f98-8762-9ede9c706bc9\") " pod="kube-system/kube-proxy-zl7gg"
	Oct 06 19:56:04 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:04.286491    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7llsm\" (UniqueName: \"kubernetes.io/projected/05397544-ebaf-4f98-8762-9ede9c706bc9-kube-api-access-7llsm\") pod \"kube-proxy-zl7gg\" (UID: \"05397544-ebaf-4f98-8762-9ede9c706bc9\") " pod="kube-system/kube-proxy-zl7gg"
	Oct 06 19:56:04 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:04.286554    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e281e8b3-9cc4-41fb-8e22-a66ef4e23a38-cni-cfg\") pod \"kindnet-twtwt\" (UID: \"e281e8b3-9cc4-41fb-8e22-a66ef4e23a38\") " pod="kube-system/kindnet-twtwt"
	Oct 06 19:56:04 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:04.286575    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e281e8b3-9cc4-41fb-8e22-a66ef4e23a38-xtables-lock\") pod \"kindnet-twtwt\" (UID: \"e281e8b3-9cc4-41fb-8e22-a66ef4e23a38\") " pod="kube-system/kindnet-twtwt"
	Oct 06 19:56:04 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:04.286628    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e281e8b3-9cc4-41fb-8e22-a66ef4e23a38-lib-modules\") pod \"kindnet-twtwt\" (UID: \"e281e8b3-9cc4-41fb-8e22-a66ef4e23a38\") " pod="kube-system/kindnet-twtwt"
	Oct 06 19:56:04 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:04.392210    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8dqp\" (UniqueName: \"kubernetes.io/projected/e281e8b3-9cc4-41fb-8e22-a66ef4e23a38-kube-api-access-x8dqp\") pod \"kindnet-twtwt\" (UID: \"e281e8b3-9cc4-41fb-8e22-a66ef4e23a38\") " pod="kube-system/kindnet-twtwt"
	Oct 06 19:56:04 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:04.413572    1302 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 06 19:56:05 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:05.467407    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zl7gg" podStartSLOduration=1.467387508 podStartE2EDuration="1.467387508s" podCreationTimestamp="2025-10-06 19:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:56:05.423364948 +0000 UTC m=+7.244847735" watchObservedRunningTime="2025-10-06 19:56:05.467387508 +0000 UTC m=+7.288870271"
	Oct 06 19:56:08 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:08.219279    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-twtwt" podStartSLOduration=4.219260803 podStartE2EDuration="4.219260803s" podCreationTimestamp="2025-10-06 19:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:56:05.469443816 +0000 UTC m=+7.290926604" watchObservedRunningTime="2025-10-06 19:56:08.219260803 +0000 UTC m=+10.040743574"
	Oct 06 19:56:45 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:45.627940    1302 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 06 19:56:45 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:45.724164    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3cd050f9-3953-4804-bbda-79ae9e50cf67-tmp\") pod \"storage-provisioner\" (UID: \"3cd050f9-3953-4804-bbda-79ae9e50cf67\") " pod="kube-system/storage-provisioner"
	Oct 06 19:56:45 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:45.724366    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2vdr\" (UniqueName: \"kubernetes.io/projected/3cd050f9-3953-4804-bbda-79ae9e50cf67-kube-api-access-f2vdr\") pod \"storage-provisioner\" (UID: \"3cd050f9-3953-4804-bbda-79ae9e50cf67\") " pod="kube-system/storage-provisioner"
	Oct 06 19:56:45 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:45.724452    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89f11c5e-2682-4227-80ba-2fe8b97c1629-config-volume\") pod \"coredns-66bc5c9577-bns67\" (UID: \"89f11c5e-2682-4227-80ba-2fe8b97c1629\") " pod="kube-system/coredns-66bc5c9577-bns67"
	Oct 06 19:56:45 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:45.724532    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbfrx\" (UniqueName: \"kubernetes.io/projected/89f11c5e-2682-4227-80ba-2fe8b97c1629-kube-api-access-tbfrx\") pod \"coredns-66bc5c9577-bns67\" (UID: \"89f11c5e-2682-4227-80ba-2fe8b97c1629\") " pod="kube-system/coredns-66bc5c9577-bns67"
	Oct 06 19:56:45 default-k8s-diff-port-997276 kubelet[1302]: W1006 19:56:45.994569    1302 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/crio-fca7b2295f17dfb9f3b1a0641af0e4b9f42981d310087680387a3643054b63e2 WatchSource:0}: Error finding container fca7b2295f17dfb9f3b1a0641af0e4b9f42981d310087680387a3643054b63e2: Status 404 returned error can't find the container with id fca7b2295f17dfb9f3b1a0641af0e4b9f42981d310087680387a3643054b63e2
	Oct 06 19:56:46 default-k8s-diff-port-997276 kubelet[1302]: W1006 19:56:46.078330    1302 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/crio-a00225e0e16ef1b7a3b3c0feb7fc09b9ea0b37e6279332b30654c716b3e4725f WatchSource:0}: Error finding container a00225e0e16ef1b7a3b3c0feb7fc09b9ea0b37e6279332b30654c716b3e4725f: Status 404 returned error can't find the container with id a00225e0e16ef1b7a3b3c0feb7fc09b9ea0b37e6279332b30654c716b3e4725f
	Oct 06 19:56:46 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:46.536674    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bns67" podStartSLOduration=42.536654411 podStartE2EDuration="42.536654411s" podCreationTimestamp="2025-10-06 19:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:56:46.515788371 +0000 UTC m=+48.337271133" watchObservedRunningTime="2025-10-06 19:56:46.536654411 +0000 UTC m=+48.358137182"
	Oct 06 19:56:46 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:46.555022    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.555004459 podStartE2EDuration="40.555004459s" podCreationTimestamp="2025-10-06 19:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:56:46.537465815 +0000 UTC m=+48.358948594" watchObservedRunningTime="2025-10-06 19:56:46.555004459 +0000 UTC m=+48.376487221"
	Oct 06 19:56:48 default-k8s-diff-port-997276 kubelet[1302]: I1006 19:56:48.750977    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzrv4\" (UniqueName: \"kubernetes.io/projected/53963bee-94d1-4ca1-8020-154e6f994193-kube-api-access-mzrv4\") pod \"busybox\" (UID: \"53963bee-94d1-4ca1-8020-154e6f994193\") " pod="default/busybox"
	Oct 06 19:56:49 default-k8s-diff-port-997276 kubelet[1302]: W1006 19:56:49.084484    1302 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/crio-a78605bc1fe4e819f5d5f5ccc7d5a1264fc57b2dcf150a812b2a104b192385a1 WatchSource:0}: Error finding container a78605bc1fe4e819f5d5f5ccc7d5a1264fc57b2dcf150a812b2a104b192385a1: Status 404 returned error can't find the container with id a78605bc1fe4e819f5d5f5ccc7d5a1264fc57b2dcf150a812b2a104b192385a1
	
	
	==> storage-provisioner [002ec5f47198fd5c8977f3580f9050957679ac52c387fa4e774fb32407c3166a] <==
	I1006 19:56:46.165491       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 19:56:46.207896       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 19:56:46.208025       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1006 19:56:46.210835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:46.230223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:56:46.230651       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 19:56:46.230893       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-997276_f288d637-8ac7-448c-bd65-a52458db8723!
	I1006 19:56:46.245937       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6aac352d-9443-44be-81f2-135d3c658690", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-997276_f288d637-8ac7-448c-bd65-a52458db8723 became leader
	W1006 19:56:46.247243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:46.254428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:56:46.335990       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-997276_f288d637-8ac7-448c-bd65-a52458db8723!
	W1006 19:56:48.258093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:48.264281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:50.268392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:50.278339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:52.282401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:52.287759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:54.290618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:54.297758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:56.301163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:56.307623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:58.312124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:56:58.333944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-997276 -n default-k8s-diff-port-997276
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-997276 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.04s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-988436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-988436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (282.859443ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:57:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-988436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
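For context, the paused-state check that fails above can be re-run by hand; a minimal sketch, assuming the newest-cni-988436 profile from this run is still available (the runc invocation is the one quoted in the stderr block, not a new command):

	# re-run the check minikube performs inside the node (command quoted in the stderr above)
	out/minikube-linux-arm64 -p newest-cni-988436 ssh -- sudo runc list -f json
	# confirm whether the runc state directory reported as missing actually exists
	out/minikube-linux-arm64 -p newest-cni-988436 ssh -- ls -la /run/runc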
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-988436
helpers_test.go:243: (dbg) docker inspect newest-cni-988436:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd",
	        "Created": "2025-10-06T19:56:32.85241989Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 210441,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:56:32.896661488Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd/hosts",
	        "LogPath": "/var/lib/docker/containers/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd-json.log",
	        "Name": "/newest-cni-988436",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-988436:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-988436",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd",
	                "LowerDir": "/var/lib/docker/overlay2/5c259bf9ca1cd1f255780dfe0febf52c4b8906ab233552c3762d9ba362419316-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5c259bf9ca1cd1f255780dfe0febf52c4b8906ab233552c3762d9ba362419316/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5c259bf9ca1cd1f255780dfe0febf52c4b8906ab233552c3762d9ba362419316/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5c259bf9ca1cd1f255780dfe0febf52c4b8906ab233552c3762d9ba362419316/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-988436",
	                "Source": "/var/lib/docker/volumes/newest-cni-988436/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-988436",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-988436",
	                "name.minikube.sigs.k8s.io": "newest-cni-988436",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a30c56bb2da83afe45b1edd952061d6e21923fe2b90f496e14451dd429fd4eb",
	            "SandboxKey": "/var/run/docker/netns/3a30c56bb2da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-988436": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:ad:ad:eb:d0:fe",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a65d7e56c8a8636a38fe861ce7ce76450c77f0c819639a82d76a33b2e2e5cd5c",
	                    "EndpointID": "9996edd26d5ab6c339e7b5c89aa940dcf68b7ea6182dcf40076e60fde2fc92e5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-988436",
	                        "9b780de2752c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-988436 -n newest-cni-988436
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-988436 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-988436 logs -n 25: (1.061669918s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-100545                                                                                                                                                                                                                     │ old-k8s-version-100545       │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:52 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:52 UTC │ 06 Oct 25 19:53 UTC │
	│ delete  │ -p cert-expiration-585086                                                                                                                                                                                                                     │ cert-expiration-585086       │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:53 UTC │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-314275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │                     │
	│ stop    │ -p no-preload-314275 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:53 UTC │ 06 Oct 25 19:54 UTC │
	│ addons  │ enable dashboard -p no-preload-314275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:54 UTC │
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-830393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │                     │
	│ stop    │ -p embed-certs-830393 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-830393 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ pause   │ -p no-preload-314275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │                     │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:56 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p disable-driver-mounts-932453                                                                                                                                                                                                               │ disable-driver-mounts-932453 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ start   │ -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:56 UTC │
	│ image   │ embed-certs-830393 image list --format=json                                                                                                                                                                                                   │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ pause   │ -p embed-certs-830393 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	│ delete  │ -p embed-certs-830393                                                                                                                                                                                                                         │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ delete  │ -p embed-certs-830393                                                                                                                                                                                                                         │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ start   │ -p newest-cni-988436 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-997276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-997276 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-988436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:56:27
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:56:27.357993  210052 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:56:27.358117  210052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:56:27.358176  210052 out.go:374] Setting ErrFile to fd 2...
	I1006 19:56:27.358181  210052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:56:27.358452  210052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:56:27.358877  210052 out.go:368] Setting JSON to false
	I1006 19:56:27.359821  210052 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5923,"bootTime":1759774665,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:56:27.359887  210052 start.go:140] virtualization:  
	I1006 19:56:27.363905  210052 out.go:179] * [newest-cni-988436] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:56:27.367891  210052 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:56:27.368035  210052 notify.go:220] Checking for updates...
	I1006 19:56:27.373770  210052 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:56:27.376837  210052 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:56:27.379825  210052 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:56:27.383158  210052 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:56:27.386221  210052 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:56:27.390345  210052 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:56:27.390460  210052 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:56:27.417252  210052 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:56:27.417372  210052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:56:27.489240  210052 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:56:27.479471565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:56:27.489350  210052 docker.go:318] overlay module found
	I1006 19:56:27.492599  210052 out.go:179] * Using the docker driver based on user configuration
	I1006 19:56:27.495496  210052 start.go:304] selected driver: docker
	I1006 19:56:27.495517  210052 start.go:924] validating driver "docker" against <nil>
	I1006 19:56:27.495532  210052 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:56:27.496473  210052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:56:27.555294  210052 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:56:27.545401549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:56:27.555465  210052 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1006 19:56:27.555502  210052 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1006 19:56:27.555808  210052 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1006 19:56:27.558387  210052 out.go:179] * Using Docker driver with root privileges
	I1006 19:56:27.561205  210052 cni.go:84] Creating CNI manager for ""
	I1006 19:56:27.561271  210052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:56:27.561286  210052 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 19:56:27.561359  210052 start.go:348] cluster config:
	{Name:newest-cni-988436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:56:27.566436  210052 out.go:179] * Starting "newest-cni-988436" primary control-plane node in "newest-cni-988436" cluster
	I1006 19:56:27.569272  210052 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:56:27.572353  210052 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:56:27.575074  210052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:56:27.575132  210052 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:56:27.575156  210052 cache.go:58] Caching tarball of preloaded images
	I1006 19:56:27.575178  210052 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:56:27.575243  210052 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:56:27.575259  210052 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:56:27.575366  210052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/config.json ...
	I1006 19:56:27.575382  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/config.json: {Name:mkf9eff8c85abad9a584a9ab3fd004384c67d223 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:27.598586  210052 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:56:27.598610  210052 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:56:27.598630  210052 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:56:27.598653  210052 start.go:360] acquireMachinesLock for newest-cni-988436: {Name:mk73775a9b90360fc78b4ca045cf6f7e4dbc2ba0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:56:27.598754  210052 start.go:364] duration metric: took 79.723µs to acquireMachinesLock for "newest-cni-988436"
	I1006 19:56:27.598784  210052 start.go:93] Provisioning new machine with config: &{Name:newest-cni-988436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:56:27.598852  210052 start.go:125] createHost starting for "" (driver="docker")
	W1006 19:56:24.581465  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:26.589768  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	I1006 19:56:27.602304  210052 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 19:56:27.602540  210052 start.go:159] libmachine.API.Create for "newest-cni-988436" (driver="docker")
	I1006 19:56:27.602587  210052 client.go:168] LocalClient.Create starting
	I1006 19:56:27.602671  210052 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem
	I1006 19:56:27.602710  210052 main.go:141] libmachine: Decoding PEM data...
	I1006 19:56:27.602733  210052 main.go:141] libmachine: Parsing certificate...
	I1006 19:56:27.602785  210052 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem
	I1006 19:56:27.602813  210052 main.go:141] libmachine: Decoding PEM data...
	I1006 19:56:27.602834  210052 main.go:141] libmachine: Parsing certificate...
	I1006 19:56:27.603211  210052 cli_runner.go:164] Run: docker network inspect newest-cni-988436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 19:56:27.619227  210052 cli_runner.go:211] docker network inspect newest-cni-988436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 19:56:27.619302  210052 network_create.go:284] running [docker network inspect newest-cni-988436] to gather additional debugging logs...
	I1006 19:56:27.619318  210052 cli_runner.go:164] Run: docker network inspect newest-cni-988436
	W1006 19:56:27.634402  210052 cli_runner.go:211] docker network inspect newest-cni-988436 returned with exit code 1
	I1006 19:56:27.634430  210052 network_create.go:287] error running [docker network inspect newest-cni-988436]: docker network inspect newest-cni-988436: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-988436 not found
	I1006 19:56:27.634443  210052 network_create.go:289] output of [docker network inspect newest-cni-988436]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-988436 not found
	
	** /stderr **
	I1006 19:56:27.634577  210052 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:56:27.652316  210052 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7058eae896da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:57:c0:bd:de:1b} reservation:<nil>}
	I1006 19:56:27.652636  210052 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e321ce8cf6dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:e6:76:87:fe:bb} reservation:<nil>}
	I1006 19:56:27.652990  210052 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-17bdbd7bd7be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:93:f3:19:55:10} reservation:<nil>}
	I1006 19:56:27.653260  210052 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2e10a72004c0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:2c:92:d4:96:5e} reservation:<nil>}
	I1006 19:56:27.653739  210052 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a4f4b0}
	I1006 19:56:27.653765  210052 network_create.go:124] attempt to create docker network newest-cni-988436 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1006 19:56:27.653837  210052 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-988436 newest-cni-988436
	I1006 19:56:27.727009  210052 network_create.go:108] docker network newest-cni-988436 192.168.85.0/24 created
	I1006 19:56:27.727040  210052 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-988436" container
	I1006 19:56:27.727115  210052 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 19:56:27.744832  210052 cli_runner.go:164] Run: docker volume create newest-cni-988436 --label name.minikube.sigs.k8s.io=newest-cni-988436 --label created_by.minikube.sigs.k8s.io=true
	I1006 19:56:27.764605  210052 oci.go:103] Successfully created a docker volume newest-cni-988436
	I1006 19:56:27.764702  210052 cli_runner.go:164] Run: docker run --rm --name newest-cni-988436-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-988436 --entrypoint /usr/bin/test -v newest-cni-988436:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 19:56:28.312473  210052 oci.go:107] Successfully prepared a docker volume newest-cni-988436
	I1006 19:56:28.312516  210052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:56:28.312535  210052 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 19:56:28.312602  210052 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-988436:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	W1006 19:56:29.081881  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:31.082305  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:33.580892  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	I1006 19:56:32.785958  210052 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-988436:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.473321259s)
	I1006 19:56:32.785991  210052 kic.go:203] duration metric: took 4.473452732s to extract preloaded images to volume ...
	W1006 19:56:32.786143  210052 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 19:56:32.786248  210052 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 19:56:32.838133  210052 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-988436 --name newest-cni-988436 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-988436 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-988436 --network newest-cni-988436 --ip 192.168.85.2 --volume newest-cni-988436:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 19:56:33.126638  210052 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Running}}
	I1006 19:56:33.162178  210052 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:56:33.180774  210052 cli_runner.go:164] Run: docker exec newest-cni-988436 stat /var/lib/dpkg/alternatives/iptables
	I1006 19:56:33.225510  210052 oci.go:144] the created container "newest-cni-988436" has a running status.
	I1006 19:56:33.225548  210052 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa...
	I1006 19:56:33.431069  210052 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 19:56:33.466652  210052 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:56:33.499299  210052 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 19:56:33.499320  210052 kic_runner.go:114] Args: [docker exec --privileged newest-cni-988436 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 19:56:33.598929  210052 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:56:33.618475  210052 machine.go:93] provisionDockerMachine start ...
	I1006 19:56:33.618580  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:33.637976  210052 main.go:141] libmachine: Using SSH client type: native
	I1006 19:56:33.638310  210052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1006 19:56:33.638320  210052 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:56:33.638991  210052 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1006 19:56:36.775541  210052 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-988436
	
	I1006 19:56:36.775565  210052 ubuntu.go:182] provisioning hostname "newest-cni-988436"
	I1006 19:56:36.775630  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:36.793876  210052 main.go:141] libmachine: Using SSH client type: native
	I1006 19:56:36.794186  210052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1006 19:56:36.794204  210052 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-988436 && echo "newest-cni-988436" | sudo tee /etc/hostname
	I1006 19:56:36.938442  210052 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-988436
	
	I1006 19:56:36.938536  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:36.957972  210052 main.go:141] libmachine: Using SSH client type: native
	I1006 19:56:36.958283  210052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1006 19:56:36.958303  210052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-988436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-988436/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-988436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:56:37.100192  210052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:56:37.100237  210052 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:56:37.100265  210052 ubuntu.go:190] setting up certificates
	I1006 19:56:37.100274  210052 provision.go:84] configureAuth start
	I1006 19:56:37.100346  210052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-988436
	I1006 19:56:37.119295  210052 provision.go:143] copyHostCerts
	I1006 19:56:37.119371  210052 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:56:37.119386  210052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:56:37.119468  210052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:56:37.119564  210052 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:56:37.119574  210052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:56:37.119601  210052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:56:37.119659  210052 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:56:37.119668  210052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:56:37.119693  210052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:56:37.119772  210052 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.newest-cni-988436 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-988436]
	W1006 19:56:36.080827  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:38.082444  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	I1006 19:56:37.426302  210052 provision.go:177] copyRemoteCerts
	I1006 19:56:37.426366  210052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:56:37.426403  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:37.457518  210052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:56:37.559808  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:56:37.581588  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 19:56:37.599859  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 19:56:37.617035  210052 provision.go:87] duration metric: took 516.722506ms to configureAuth
	I1006 19:56:37.617133  210052 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:56:37.617342  210052 config.go:182] Loaded profile config "newest-cni-988436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:56:37.617474  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:37.637667  210052 main.go:141] libmachine: Using SSH client type: native
	I1006 19:56:37.637970  210052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1006 19:56:37.637985  210052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:56:37.910498  210052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:56:37.910525  210052 machine.go:96] duration metric: took 4.292030507s to provisionDockerMachine
	I1006 19:56:37.910535  210052 client.go:171] duration metric: took 10.307937103s to LocalClient.Create
	I1006 19:56:37.910545  210052 start.go:167] duration metric: took 10.308006348s to libmachine.API.Create "newest-cni-988436"
	I1006 19:56:37.910552  210052 start.go:293] postStartSetup for "newest-cni-988436" (driver="docker")
	I1006 19:56:37.910562  210052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:56:37.910633  210052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:56:37.910678  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:37.928551  210052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:56:38.033048  210052 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:56:38.037223  210052 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:56:38.037253  210052 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:56:38.037267  210052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:56:38.037332  210052 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:56:38.037421  210052 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:56:38.037525  210052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:56:38.046942  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:56:38.068019  210052 start.go:296] duration metric: took 157.450178ms for postStartSetup
	I1006 19:56:38.068422  210052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-988436
	I1006 19:56:38.092234  210052 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/config.json ...
	I1006 19:56:38.092529  210052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:56:38.092583  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:38.109788  210052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:56:38.205124  210052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:56:38.209969  210052 start.go:128] duration metric: took 10.611099752s to createHost
	I1006 19:56:38.209995  210052 start.go:83] releasing machines lock for "newest-cni-988436", held for 10.611228173s
	I1006 19:56:38.210094  210052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-988436
	I1006 19:56:38.227465  210052 ssh_runner.go:195] Run: cat /version.json
	I1006 19:56:38.227522  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:38.227785  210052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:56:38.227850  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:56:38.247799  210052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:56:38.256254  210052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:56:38.359242  210052 ssh_runner.go:195] Run: systemctl --version
	I1006 19:56:38.472099  210052 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:56:38.511599  210052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:56:38.517885  210052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:56:38.517960  210052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:56:38.555546  210052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 19:56:38.555567  210052 start.go:495] detecting cgroup driver to use...
	I1006 19:56:38.555604  210052 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:56:38.555656  210052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:56:38.575462  210052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:56:38.593386  210052 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:56:38.593486  210052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:56:38.610517  210052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:56:38.629288  210052 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:56:38.760103  210052 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:56:38.890811  210052 docker.go:234] disabling docker service ...
	I1006 19:56:38.890902  210052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:56:38.918134  210052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:56:38.931520  210052 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:56:39.053585  210052 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:56:39.181254  210052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:56:39.194464  210052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:56:39.209553  210052 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:56:39.209680  210052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:56:39.218804  210052 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:56:39.218876  210052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:56:39.228509  210052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:56:39.237269  210052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:56:39.245888  210052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:56:39.254513  210052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:56:39.263406  210052 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:56:39.279311  210052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:56:39.288367  210052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:56:39.296256  210052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:56:39.303886  210052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:56:39.425912  210052 ssh_runner.go:195] Run: sudo systemctl restart crio
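Taken together, the steps above point crictl at the CRI-O socket and then patch /etc/crio/crio.conf.d/02-crio.conf in place with sed before restarting the runtime. A rough sketch of what those edits leave behind follows; the table headers are assumptions based on a stock CRI-O layout, not something printed in this log:

	# /etc/crictl.yaml (written verbatim by the tee command above)
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (keys touched by the sed edits above)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]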
	I1006 19:56:39.569993  210052 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:56:39.570070  210052 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:56:39.574526  210052 start.go:563] Will wait 60s for crictl version
	I1006 19:56:39.574592  210052 ssh_runner.go:195] Run: which crictl
	I1006 19:56:39.580184  210052 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:56:39.607742  210052 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:56:39.607886  210052 ssh_runner.go:195] Run: crio --version
	I1006 19:56:39.641389  210052 ssh_runner.go:195] Run: crio --version
	I1006 19:56:39.675076  210052 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:56:39.677929  210052 cli_runner.go:164] Run: docker network inspect newest-cni-988436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:56:39.696516  210052 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1006 19:56:39.700340  210052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:56:39.713395  210052 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1006 19:56:39.716246  210052 kubeadm.go:883] updating cluster {Name:newest-cni-988436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:56:39.716412  210052 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:56:39.716509  210052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:56:39.755113  210052 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:56:39.755135  210052 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:56:39.755196  210052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:56:39.787126  210052 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:56:39.787196  210052 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:56:39.787216  210052 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1006 19:56:39.787335  210052 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-988436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:56:39.787460  210052 ssh_runner.go:195] Run: crio config
	I1006 19:56:39.863595  210052 cni.go:84] Creating CNI manager for ""
	I1006 19:56:39.863618  210052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:56:39.863637  210052 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1006 19:56:39.863661  210052 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-988436 NodeName:newest-cni-988436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:56:39.863818  210052 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-988436"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:56:39.863893  210052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:56:39.872433  210052 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:56:39.872511  210052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:56:39.880354  210052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 19:56:39.894827  210052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:56:39.908285  210052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
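At this point the rendered kubeadm config shown above has been copied to the node as /var/tmp/minikube/kubeadm.yaml.new. If you wanted to sanity-check a config like this by hand before init runs, recent kubeadm releases ship a validate subcommand; a sketch, assuming the same binary path the test uses:

	# validate the generated kubeadm config against the v1beta4 schema (sketch; requires a kubeadm version that supports "config validate")
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new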
	I1006 19:56:39.921983  210052 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:56:39.925561  210052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:56:39.936397  210052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:56:40.085605  210052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:56:40.106839  210052 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436 for IP: 192.168.85.2
	I1006 19:56:40.106915  210052 certs.go:195] generating shared ca certs ...
	I1006 19:56:40.106947  210052 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:40.107150  210052 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:56:40.107238  210052 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:56:40.107268  210052 certs.go:257] generating profile certs ...
	I1006 19:56:40.107358  210052 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/client.key
	I1006 19:56:40.107407  210052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/client.crt with IP's: []
	I1006 19:56:40.665462  210052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/client.crt ...
	I1006 19:56:40.665496  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/client.crt: {Name:mk82863bb472ecb3697b17bef486db7975bf391d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:40.665718  210052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/client.key ...
	I1006 19:56:40.665731  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/client.key: {Name:mkdc8e2f4c6d136d3005485915301015a1703192 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:40.665828  210052 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key.1ee6693d
	I1006 19:56:40.665846  210052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.crt.1ee6693d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1006 19:56:40.921794  210052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.crt.1ee6693d ...
	I1006 19:56:40.921825  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.crt.1ee6693d: {Name:mkc191190d9fb7bfe77b9015024577fb844fb7a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:40.922002  210052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key.1ee6693d ...
	I1006 19:56:40.922015  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key.1ee6693d: {Name:mke12ff7d149e4967c0b82c9911c4e4a149defd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:40.922097  210052 certs.go:382] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.crt.1ee6693d -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.crt
	I1006 19:56:40.922176  210052 certs.go:386] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key.1ee6693d -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key
	I1006 19:56:40.922234  210052 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.key
	I1006 19:56:40.922253  210052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.crt with IP's: []
	I1006 19:56:41.223580  210052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.crt ...
	I1006 19:56:41.223606  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.crt: {Name:mka69f8bb4fd0f5d11ba2f11d7c7c8672761b518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:41.223790  210052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.key ...
	I1006 19:56:41.223809  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.key: {Name:mkc4b431cd69cb7113ec8abbadf217828f8fe347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:56:41.223992  210052 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:56:41.224034  210052 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:56:41.224048  210052 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:56:41.224072  210052 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:56:41.224100  210052 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:56:41.224125  210052 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:56:41.224181  210052 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:56:41.224776  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:56:41.243540  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:56:41.263322  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:56:41.282749  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:56:41.301198  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 19:56:41.320838  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 19:56:41.338332  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:56:41.357382  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 19:56:41.375828  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:56:41.399766  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:56:41.419789  210052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:56:41.439316  210052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:56:41.453030  210052 ssh_runner.go:195] Run: openssl version
	I1006 19:56:41.460587  210052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:56:41.470142  210052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:56:41.473980  210052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:56:41.474040  210052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:56:41.520150  210052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:56:41.529157  210052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:56:41.540617  210052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:56:41.547943  210052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:56:41.548006  210052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:56:41.598583  210052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 19:56:41.607325  210052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:56:41.615729  210052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:56:41.619237  210052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:56:41.619306  210052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:56:41.661642  210052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
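The certificate installation loop above follows the usual OpenSSL subject-hash convention: each CA dropped into /usr/share/ca-certificates is hashed with openssl x509 -hash, and the hash becomes the name of a symlink in /etc/ssl/certs so TLS clients can look the issuer up by hash. A minimal by-hand sketch of the same steps for the minikube CA, reusing the paths and the b5213941 hash seen in this run:

	# expose the CA under /etc/ssl/certs by name ...
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	# compute its subject hash (prints b5213941 for this particular CA)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# ... and again under that hash, which is the name OpenSSL-based clients actually resolve
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0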
	I1006 19:56:41.670098  210052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:56:41.673806  210052 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 19:56:41.673886  210052 kubeadm.go:400] StartCluster: {Name:newest-cni-988436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:56:41.673987  210052 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:56:41.674051  210052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:56:41.710128  210052 cri.go:89] found id: ""
	I1006 19:56:41.710278  210052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:56:41.718440  210052 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 19:56:41.729365  210052 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:56:41.729492  210052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:56:41.737890  210052 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:56:41.737910  210052 kubeadm.go:157] found existing configuration files:
	
	I1006 19:56:41.737995  210052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 19:56:41.746054  210052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:56:41.746206  210052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:56:41.754054  210052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 19:56:41.762051  210052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:56:41.762169  210052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:56:41.769817  210052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 19:56:41.777853  210052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:56:41.777972  210052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:56:41.785311  210052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 19:56:41.793390  210052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:56:41.793463  210052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 19:56:41.801532  210052 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:56:41.846098  210052 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:56:41.846499  210052 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:56:41.874009  210052 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:56:41.874097  210052 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:56:41.874140  210052 kubeadm.go:318] OS: Linux
	I1006 19:56:41.874201  210052 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:56:41.874263  210052 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:56:41.874323  210052 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:56:41.874385  210052 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:56:41.874440  210052 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:56:41.874499  210052 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:56:41.874558  210052 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:56:41.874623  210052 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:56:41.874696  210052 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:56:41.957610  210052 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:56:41.957741  210052 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:56:41.957847  210052 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:56:41.965754  210052 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 19:56:41.972299  210052 out.go:252]   - Generating certificates and keys ...
	I1006 19:56:41.972404  210052 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:56:41.972476  210052 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:56:42.188994  210052 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1006 19:56:40.089093  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	W1006 19:56:42.580996  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	I1006 19:56:43.268934  210052 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 19:56:43.761318  210052 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 19:56:44.281673  210052 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 19:56:44.552377  210052 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 19:56:44.552956  210052 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-988436] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1006 19:56:44.927974  210052 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 19:56:44.928125  210052 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-988436] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1006 19:56:45.729706  210052 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 19:56:46.699422  210052 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 19:56:47.267978  210052 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 19:56:47.268279  210052 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1006 19:56:44.582051  205530 node_ready.go:57] node "default-k8s-diff-port-997276" has "Ready":"False" status (will retry)
	I1006 19:56:46.081891  205530 node_ready.go:49] node "default-k8s-diff-port-997276" is "Ready"
	I1006 19:56:46.081929  205530 node_ready.go:38] duration metric: took 40.004115031s for node "default-k8s-diff-port-997276" to be "Ready" ...
	I1006 19:56:46.081943  205530 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:56:46.082002  205530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:56:46.106981  205530 api_server.go:72] duration metric: took 41.394476453s to wait for apiserver process to appear ...
	I1006 19:56:46.107010  205530 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:56:46.107046  205530 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1006 19:56:46.116095  205530 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1006 19:56:46.117395  205530 api_server.go:141] control plane version: v1.34.1
	I1006 19:56:46.117424  205530 api_server.go:131] duration metric: took 10.406615ms to wait for apiserver health ...
	I1006 19:56:46.117433  205530 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:56:46.130036  205530 system_pods.go:59] 8 kube-system pods found
	I1006 19:56:46.130077  205530 system_pods.go:61] "coredns-66bc5c9577-bns67" [89f11c5e-2682-4227-80ba-2fe8b97c1629] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:56:46.130084  205530 system_pods.go:61] "etcd-default-k8s-diff-port-997276" [721c87e6-60f9-41e8-ac17-f6029a04e17e] Running
	I1006 19:56:46.130090  205530 system_pods.go:61] "kindnet-twtwt" [e281e8b3-9cc4-41fb-8e22-a66ef4e23a38] Running
	I1006 19:56:46.130099  205530 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-997276" [7d10547b-e1b4-4af4-8da7-f4d28894afd4] Running
	I1006 19:56:46.130104  205530 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-997276" [a477444a-eb40-45c8-8588-842a7de0164e] Running
	I1006 19:56:46.130108  205530 system_pods.go:61] "kube-proxy-zl7gg" [05397544-ebaf-4f98-8762-9ede9c706bc9] Running
	I1006 19:56:46.130113  205530 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-997276" [1b4b8d6e-f4d6-40f1-99eb-2d33fc3533a5] Running
	I1006 19:56:46.130123  205530 system_pods.go:61] "storage-provisioner" [3cd050f9-3953-4804-bbda-79ae9e50cf67] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:56:46.130130  205530 system_pods.go:74] duration metric: took 12.691146ms to wait for pod list to return data ...
	I1006 19:56:46.130144  205530 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:56:46.136990  205530 default_sa.go:45] found service account: "default"
	I1006 19:56:46.137018  205530 default_sa.go:55] duration metric: took 6.867335ms for default service account to be created ...
	I1006 19:56:46.137028  205530 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 19:56:46.143156  205530 system_pods.go:86] 8 kube-system pods found
	I1006 19:56:46.143193  205530 system_pods.go:89] "coredns-66bc5c9577-bns67" [89f11c5e-2682-4227-80ba-2fe8b97c1629] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:56:46.143200  205530 system_pods.go:89] "etcd-default-k8s-diff-port-997276" [721c87e6-60f9-41e8-ac17-f6029a04e17e] Running
	I1006 19:56:46.143207  205530 system_pods.go:89] "kindnet-twtwt" [e281e8b3-9cc4-41fb-8e22-a66ef4e23a38] Running
	I1006 19:56:46.143211  205530 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997276" [7d10547b-e1b4-4af4-8da7-f4d28894afd4] Running
	I1006 19:56:46.143216  205530 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997276" [a477444a-eb40-45c8-8588-842a7de0164e] Running
	I1006 19:56:46.143220  205530 system_pods.go:89] "kube-proxy-zl7gg" [05397544-ebaf-4f98-8762-9ede9c706bc9] Running
	I1006 19:56:46.143224  205530 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997276" [1b4b8d6e-f4d6-40f1-99eb-2d33fc3533a5] Running
	I1006 19:56:46.143229  205530 system_pods.go:89] "storage-provisioner" [3cd050f9-3953-4804-bbda-79ae9e50cf67] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:56:46.143251  205530 retry.go:31] will retry after 288.210543ms: missing components: kube-dns
	I1006 19:56:46.435799  205530 system_pods.go:86] 8 kube-system pods found
	I1006 19:56:46.435836  205530 system_pods.go:89] "coredns-66bc5c9577-bns67" [89f11c5e-2682-4227-80ba-2fe8b97c1629] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:56:46.435843  205530 system_pods.go:89] "etcd-default-k8s-diff-port-997276" [721c87e6-60f9-41e8-ac17-f6029a04e17e] Running
	I1006 19:56:46.435850  205530 system_pods.go:89] "kindnet-twtwt" [e281e8b3-9cc4-41fb-8e22-a66ef4e23a38] Running
	I1006 19:56:46.435855  205530 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997276" [7d10547b-e1b4-4af4-8da7-f4d28894afd4] Running
	I1006 19:56:46.435859  205530 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997276" [a477444a-eb40-45c8-8588-842a7de0164e] Running
	I1006 19:56:46.435864  205530 system_pods.go:89] "kube-proxy-zl7gg" [05397544-ebaf-4f98-8762-9ede9c706bc9] Running
	I1006 19:56:46.435870  205530 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997276" [1b4b8d6e-f4d6-40f1-99eb-2d33fc3533a5] Running
	I1006 19:56:46.435887  205530 system_pods.go:89] "storage-provisioner" [3cd050f9-3953-4804-bbda-79ae9e50cf67] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 19:56:46.435906  205530 retry.go:31] will retry after 262.137696ms: missing components: kube-dns
	I1006 19:56:46.704260  205530 system_pods.go:86] 8 kube-system pods found
	I1006 19:56:46.704293  205530 system_pods.go:89] "coredns-66bc5c9577-bns67" [89f11c5e-2682-4227-80ba-2fe8b97c1629] Running
	I1006 19:56:46.704304  205530 system_pods.go:89] "etcd-default-k8s-diff-port-997276" [721c87e6-60f9-41e8-ac17-f6029a04e17e] Running
	I1006 19:56:46.704321  205530 system_pods.go:89] "kindnet-twtwt" [e281e8b3-9cc4-41fb-8e22-a66ef4e23a38] Running
	I1006 19:56:46.704330  205530 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997276" [7d10547b-e1b4-4af4-8da7-f4d28894afd4] Running
	I1006 19:56:46.704336  205530 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997276" [a477444a-eb40-45c8-8588-842a7de0164e] Running
	I1006 19:56:46.704340  205530 system_pods.go:89] "kube-proxy-zl7gg" [05397544-ebaf-4f98-8762-9ede9c706bc9] Running
	I1006 19:56:46.704344  205530 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997276" [1b4b8d6e-f4d6-40f1-99eb-2d33fc3533a5] Running
	I1006 19:56:46.704352  205530 system_pods.go:89] "storage-provisioner" [3cd050f9-3953-4804-bbda-79ae9e50cf67] Running
	I1006 19:56:46.704360  205530 system_pods.go:126] duration metric: took 567.32567ms to wait for k8s-apps to be running ...
	I1006 19:56:46.704372  205530 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 19:56:46.704427  205530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:56:46.726522  205530 system_svc.go:56] duration metric: took 22.139155ms WaitForService to wait for kubelet
	I1006 19:56:46.726552  205530 kubeadm.go:586] duration metric: took 42.014051673s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:56:46.726570  205530 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:56:46.730502  205530 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:56:46.730536  205530 node_conditions.go:123] node cpu capacity is 2
	I1006 19:56:46.730560  205530 node_conditions.go:105] duration metric: took 3.984762ms to run NodePressure ...
	I1006 19:56:46.730573  205530 start.go:241] waiting for startup goroutines ...
	I1006 19:56:46.730585  205530 start.go:246] waiting for cluster config update ...
	I1006 19:56:46.730597  205530 start.go:255] writing updated cluster config ...
	I1006 19:56:46.730919  205530 ssh_runner.go:195] Run: rm -f paused
	I1006 19:56:46.734940  205530 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:56:46.740056  205530 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bns67" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:46.746379  205530 pod_ready.go:94] pod "coredns-66bc5c9577-bns67" is "Ready"
	I1006 19:56:46.746414  205530 pod_ready.go:86] duration metric: took 6.327633ms for pod "coredns-66bc5c9577-bns67" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:46.749494  205530 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:46.756307  205530 pod_ready.go:94] pod "etcd-default-k8s-diff-port-997276" is "Ready"
	I1006 19:56:46.756334  205530 pod_ready.go:86] duration metric: took 6.800322ms for pod "etcd-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:46.759272  205530 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:46.765452  205530 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-997276" is "Ready"
	I1006 19:56:46.765481  205530 pod_ready.go:86] duration metric: took 6.18555ms for pod "kube-apiserver-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:46.768411  205530 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:47.141709  205530 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-997276" is "Ready"
	I1006 19:56:47.141785  205530 pod_ready.go:86] duration metric: took 373.345577ms for pod "kube-controller-manager-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:47.341356  205530 pod_ready.go:83] waiting for pod "kube-proxy-zl7gg" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:47.740720  205530 pod_ready.go:94] pod "kube-proxy-zl7gg" is "Ready"
	I1006 19:56:47.740750  205530 pod_ready.go:86] duration metric: took 399.314802ms for pod "kube-proxy-zl7gg" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:47.940755  205530 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:48.340582  205530 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-997276" is "Ready"
	I1006 19:56:48.340607  205530 pod_ready.go:86] duration metric: took 399.828626ms for pod "kube-scheduler-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:56:48.340619  205530 pod_ready.go:40] duration metric: took 1.605638798s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:56:48.424763  205530 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 19:56:48.429795  205530 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-997276" cluster and "default" namespace by default
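The system_pods/pod_ready polling above is the automated equivalent of checking the control-plane pods by hand once kubectl has been pointed at the new cluster. A quick manual check might look like this (the context name matches the profile name, per the "Done!" line above):

	# confirm the node is Ready and the kube-system pods the test waited on are Running
	kubectl --context default-k8s-diff-port-997276 get nodes
	kubectl --context default-k8s-diff-port-997276 -n kube-system get pods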
	I1006 19:56:48.784916  210052 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:56:49.145875  210052 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:56:50.005498  210052 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:56:50.385840  210052 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:56:50.864684  210052 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:56:50.864791  210052 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:56:50.867169  210052 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 19:56:50.871232  210052 out.go:252]   - Booting up control plane ...
	I1006 19:56:50.871355  210052 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:56:50.871439  210052 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:56:50.871512  210052 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:56:50.895846  210052 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:56:50.895967  210052 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:56:50.901137  210052 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:56:50.901471  210052 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:56:50.901521  210052 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:56:51.052595  210052 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:56:51.052978  210052 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:56:52.554493  210052 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501866122s
	I1006 19:56:52.558131  210052 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:56:52.558230  210052 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1006 19:56:52.558324  210052 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:56:52.558630  210052 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 19:56:56.272199  210052 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.713406623s
	I1006 19:56:58.549068  210052 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.990935395s
	I1006 19:56:59.581466  210052 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.019390789s
	I1006 19:56:59.618564  210052 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 19:56:59.644612  210052 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 19:56:59.671944  210052 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 19:56:59.672435  210052 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-988436 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 19:56:59.702991  210052 kubeadm.go:318] [bootstrap-token] Using token: 15zm2t.z46xbqnwrxdbne6t
	I1006 19:56:59.706017  210052 out.go:252]   - Configuring RBAC rules ...
	I1006 19:56:59.706145  210052 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 19:56:59.716865  210052 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 19:56:59.732288  210052 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 19:56:59.740344  210052 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 19:56:59.745498  210052 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 19:56:59.749850  210052 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 19:56:59.986179  210052 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 19:57:00.883822  210052 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1006 19:57:00.987687  210052 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1006 19:57:00.988965  210052 kubeadm.go:318] 
	I1006 19:57:00.989042  210052 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1006 19:57:00.989048  210052 kubeadm.go:318] 
	I1006 19:57:00.989129  210052 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1006 19:57:00.989134  210052 kubeadm.go:318] 
	I1006 19:57:00.989161  210052 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1006 19:57:00.989222  210052 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 19:57:00.989276  210052 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 19:57:00.989281  210052 kubeadm.go:318] 
	I1006 19:57:00.989337  210052 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1006 19:57:00.989342  210052 kubeadm.go:318] 
	I1006 19:57:00.989392  210052 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 19:57:00.989397  210052 kubeadm.go:318] 
	I1006 19:57:00.989451  210052 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1006 19:57:00.989529  210052 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 19:57:00.989600  210052 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 19:57:00.989605  210052 kubeadm.go:318] 
	I1006 19:57:00.989694  210052 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 19:57:00.989774  210052 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1006 19:57:00.989778  210052 kubeadm.go:318] 
	I1006 19:57:00.989866  210052 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 15zm2t.z46xbqnwrxdbne6t \
	I1006 19:57:00.989982  210052 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd \
	I1006 19:57:00.990005  210052 kubeadm.go:318] 	--control-plane 
	I1006 19:57:00.990034  210052 kubeadm.go:318] 
	I1006 19:57:00.990124  210052 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1006 19:57:00.990129  210052 kubeadm.go:318] 
	I1006 19:57:00.990215  210052 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 15zm2t.z46xbqnwrxdbne6t \
	I1006 19:57:00.990322  210052 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd 
	I1006 19:57:00.994933  210052 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 19:57:00.995178  210052 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:57:00.995288  210052 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 19:57:00.995304  210052 cni.go:84] Creating CNI manager for ""
	I1006 19:57:00.995311  210052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:57:00.998440  210052 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1006 19:57:01.001425  210052 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 19:57:01.006468  210052 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1006 19:57:01.006486  210052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1006 19:57:01.024597  210052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 19:57:01.366417  210052 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 19:57:01.366496  210052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:57:01.366608  210052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-988436 minikube.k8s.io/updated_at=2025_10_06T19_57_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81 minikube.k8s.io/name=newest-cni-988436 minikube.k8s.io/primary=true
	I1006 19:57:01.492560  210052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:57:01.569017  210052 ops.go:34] apiserver oom_adj: -16
	I1006 19:57:01.993358  210052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:57:02.493477  210052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:57:02.993379  210052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:57:03.492763  210052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:57:03.992973  210052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:57:04.493168  210052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:57:04.993549  210052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:57:05.493261  210052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:57:05.588566  210052 kubeadm.go:1113] duration metric: took 4.222124622s to wait for elevateKubeSystemPrivileges
	I1006 19:57:05.588604  210052 kubeadm.go:402] duration metric: took 23.914718366s to StartCluster
	I1006 19:57:05.588624  210052 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:05.588688  210052 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:57:05.589622  210052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:05.589851  210052 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:57:05.589966  210052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 19:57:05.590225  210052 config.go:182] Loaded profile config "newest-cni-988436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:05.590267  210052 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:57:05.590329  210052 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-988436"
	I1006 19:57:05.590349  210052 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-988436"
	I1006 19:57:05.590375  210052 host.go:66] Checking if "newest-cni-988436" exists ...
	I1006 19:57:05.591138  210052 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:57:05.591575  210052 addons.go:69] Setting default-storageclass=true in profile "newest-cni-988436"
	I1006 19:57:05.591603  210052 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-988436"
	I1006 19:57:05.591915  210052 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:57:05.596180  210052 out.go:179] * Verifying Kubernetes components...
	I1006 19:57:05.601921  210052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:05.625763  210052 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 19:57:05.628687  210052 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:57:05.628710  210052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 19:57:05.628782  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:05.642367  210052 addons.go:238] Setting addon default-storageclass=true in "newest-cni-988436"
	I1006 19:57:05.642411  210052 host.go:66] Checking if "newest-cni-988436" exists ...
	I1006 19:57:05.642831  210052 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:57:05.661109  210052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:05.679810  210052 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 19:57:05.679837  210052 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 19:57:05.679929  210052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:05.722582  210052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:05.855206  210052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1006 19:57:05.883518  210052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:57:05.919500  210052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:57:05.957901  210052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 19:57:06.476890  210052 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1006 19:57:06.478867  210052 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:57:06.478932  210052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:57:06.902535  210052 api_server.go:72] duration metric: took 1.31264965s to wait for apiserver process to appear ...
	I1006 19:57:06.902561  210052 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:57:06.902577  210052 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:57:06.919525  210052 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1006 19:57:06.921771  210052 api_server.go:141] control plane version: v1.34.1
	I1006 19:57:06.921796  210052 api_server.go:131] duration metric: took 19.228079ms to wait for apiserver health ...
	I1006 19:57:06.921806  210052 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:57:06.924076  210052 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1006 19:57:06.925845  210052 system_pods.go:59] 8 kube-system pods found
	I1006 19:57:06.925881  210052 system_pods.go:61] "coredns-66bc5c9577-z6drc" [4f782721-b2ed-4a40-9181-d83ac1315d08] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1006 19:57:06.925889  210052 system_pods.go:61] "etcd-newest-cni-988436" [b27477b1-584a-48dd-964f-383c3f41e66f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:57:06.925897  210052 system_pods.go:61] "kindnet-v4krt" [8b2c3ef8-c3bb-4e24-a72a-5a696590f257] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1006 19:57:06.925903  210052 system_pods.go:61] "kube-apiserver-newest-cni-988436" [21da4d62-1ceb-4988-a95f-d00aeed96f63] Running
	I1006 19:57:06.925915  210052 system_pods.go:61] "kube-controller-manager-newest-cni-988436" [b07be9c3-cd1b-4026-8c91-76ab67ef61df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:57:06.925921  210052 system_pods.go:61] "kube-proxy-wsgmd" [b2289712-8aa7-4ef1-909f-02322c74d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1006 19:57:06.925927  210052 system_pods.go:61] "kube-scheduler-newest-cni-988436" [e8d89cea-a1d3-4c7b-ac59-50ea8df07dcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:57:06.925932  210052 system_pods.go:61] "storage-provisioner" [6120daa3-8711-44b0-8951-f629755eb03c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1006 19:57:06.925938  210052 system_pods.go:74] duration metric: took 4.125893ms to wait for pod list to return data ...
	I1006 19:57:06.925946  210052 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:57:06.926899  210052 addons.go:514] duration metric: took 1.336613587s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1006 19:57:06.929069  210052 default_sa.go:45] found service account: "default"
	I1006 19:57:06.929095  210052 default_sa.go:55] duration metric: took 3.142843ms for default service account to be created ...
	I1006 19:57:06.929108  210052 kubeadm.go:586] duration metric: took 1.33922614s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1006 19:57:06.929130  210052 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:57:06.932358  210052 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:57:06.932388  210052 node_conditions.go:123] node cpu capacity is 2
	I1006 19:57:06.932401  210052 node_conditions.go:105] duration metric: took 3.265324ms to run NodePressure ...
	I1006 19:57:06.932413  210052 start.go:241] waiting for startup goroutines ...
	I1006 19:57:06.981191  210052 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-988436" context rescaled to 1 replicas
	I1006 19:57:06.981233  210052 start.go:246] waiting for cluster config update ...
	I1006 19:57:06.981247  210052 start.go:255] writing updated cluster config ...
	I1006 19:57:06.981569  210052 ssh_runner.go:195] Run: rm -f paused
	I1006 19:57:07.046498  210052 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 19:57:07.050073  210052 out.go:179] * Done! kubectl is now configured to use "newest-cni-988436" cluster and "default" namespace by default
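	
	The kubeadm output above is the standard post-init how-to: copy admin.conf into $HOME/.kube/config (or export KUBECONFIG), then apply a pod network before joining more nodes. As a minimal verification sketch (illustrative, not part of the captured run; it assumes the kubeconfig context minikube created is named after the profile):
	
	  kubectl --context newest-cni-988436 get nodes -o wide
	  kubectl --context newest-cni-988436 -n kube-system get pods
	
	At this point in the log the node is still tainted not-ready, so coredns and storage-provisioner are expected to report Pending until the kindnet CNI starts.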
	
	
	==> CRI-O <==
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.50165006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.512820465Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=62e1d514-b98c-433a-ac4a-1a6d24993279 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.516068008Z" level=info msg="Ran pod sandbox e9c2893520846eb3efe579c0ade1bc37212c3c5d3499a4e5e91bcc2184bc76a3 with infra container: kube-system/kube-proxy-wsgmd/POD" id=62e1d514-b98c-433a-ac4a-1a6d24993279 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.517299415Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=5034c9f7-eec1-49e7-ae48-b09af979ef60 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.518457369Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=82dba499-b4e3-4887-be3a-7540cc202f51 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.533870502Z" level=info msg="Creating container: kube-system/kube-proxy-wsgmd/kube-proxy" id=de0064c4-79a4-418f-9e9e-6791ec0eed73 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.534192223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.544027317Z" level=info msg="Running pod sandbox: kube-system/kindnet-v4krt/POD" id=1bc43c70-8b76-4ca0-af2e-5c893955a2a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.544099121Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.551647888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.552969849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.560936281Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1bc43c70-8b76-4ca0-af2e-5c893955a2a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.596534952Z" level=info msg="Ran pod sandbox 349a5fe2f1e238ebf27d6f4606699050c2428a557a1a37d616aa955573313af3 with infra container: kube-system/kindnet-v4krt/POD" id=1bc43c70-8b76-4ca0-af2e-5c893955a2a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.597972804Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=080b05d3-f4fe-4a98-b15f-b8d4c851d8ec name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.600941416Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=31c2ddb9-e6f4-49bb-bc54-c57d66b996db name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.626077363Z" level=info msg="Creating container: kube-system/kindnet-v4krt/kindnet-cni" id=bfd7979a-94ac-4093-ba5c-ea0db8d5e88b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.626615687Z" level=info msg="Created container 819660a37b0ac14be450c2e3180675da37ecd23d27fc6ca3652348692dfdb62d: kube-system/kube-proxy-wsgmd/kube-proxy" id=de0064c4-79a4-418f-9e9e-6791ec0eed73 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.628668303Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.630454067Z" level=info msg="Starting container: 819660a37b0ac14be450c2e3180675da37ecd23d27fc6ca3652348692dfdb62d" id=6431b789-9be7-4bd3-9a0b-029ddfc81166 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.635003156Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.635739277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.639679Z" level=info msg="Started container" PID=1497 containerID=819660a37b0ac14be450c2e3180675da37ecd23d27fc6ca3652348692dfdb62d description=kube-system/kube-proxy-wsgmd/kube-proxy id=6431b789-9be7-4bd3-9a0b-029ddfc81166 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e9c2893520846eb3efe579c0ade1bc37212c3c5d3499a4e5e91bcc2184bc76a3
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.675543079Z" level=info msg="Created container 8bd1cff48ea3ec363980ecc7de9cd38d53f657f03a625fbbce3b9c89a79b150e: kube-system/kindnet-v4krt/kindnet-cni" id=bfd7979a-94ac-4093-ba5c-ea0db8d5e88b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.678321362Z" level=info msg="Starting container: 8bd1cff48ea3ec363980ecc7de9cd38d53f657f03a625fbbce3b9c89a79b150e" id=1f01fe85-5127-41ba-9528-cbfbfe420e16 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:57:06 newest-cni-988436 crio[836]: time="2025-10-06T19:57:06.680210275Z" level=info msg="Started container" PID=1508 containerID=8bd1cff48ea3ec363980ecc7de9cd38d53f657f03a625fbbce3b9c89a79b150e description=kube-system/kindnet-v4krt/kindnet-cni id=1f01fe85-5127-41ba-9528-cbfbfe420e16 name=/runtime.v1.RuntimeService/StartContainer sandboxID=349a5fe2f1e238ebf27d6f4606699050c2428a557a1a37d616aa955573313af3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8bd1cff48ea3e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   1 second ago        Running             kindnet-cni               0                   349a5fe2f1e23       kindnet-v4krt                               kube-system
	819660a37b0ac       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   e9c2893520846       kube-proxy-wsgmd                            kube-system
	72e269f022f25       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   e4f2ec2f8035e       etcd-newest-cni-988436                      kube-system
	33bc78e649e39       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   c2429cd54d758       kube-apiserver-newest-cni-988436            kube-system
	66181f83c4d66       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   b2910c3d292d7       kube-scheduler-newest-cni-988436            kube-system
	892bc7ebb03f4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   2ded5d1d56605       kube-controller-manager-newest-cni-988436   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-988436
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-988436
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=newest-cni-988436
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_57_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:56:57 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-988436
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:57:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:57:01 +0000   Mon, 06 Oct 2025 19:56:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:57:01 +0000   Mon, 06 Oct 2025 19:56:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:57:01 +0000   Mon, 06 Oct 2025 19:56:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 06 Oct 2025 19:57:01 +0000   Mon, 06 Oct 2025 19:56:53 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-988436
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 df3b81f742e241a9ae09f39ac56cf610
	  System UUID:                073e7007-fd42-4128-9331-8ee710c3ffcc
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-988436                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-v4krt                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-988436             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-988436    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-wsgmd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-988436             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16s (x8 over 16s)  kubelet          Node newest-cni-988436 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 16s)  kubelet          Node newest-cni-988436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 16s)  kubelet          Node newest-cni-988436 status is now: NodeHasSufficientPID
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-988436 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-988436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s                 kubelet          Node newest-cni-988436 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-988436 event: Registered Node newest-cni-988436 in Controller
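	
	The describe output explains why the kube-system pods listed earlier were Pending: the node still carries the node.kubernetes.io/not-ready:NoSchedule taint because no CNI config had been written yet (Ready=False, "no CNI configuration file in /etc/cni/net.d/"). A hedged sketch of how one might watch that condition clear once kindnet writes its config, assuming the same kubeconfig context as above:
	
	  kubectl --context newest-cni-988436 get node newest-cni-988436 -w
	  kubectl --context newest-cni-988436 get node newest-cni-988436 -o jsonpath='{.spec.taints}'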
	
	
	==> dmesg <==
	[Oct 6 19:26] overlayfs: idmapped layers are currently not supported
	[ +26.009516] overlayfs: idmapped layers are currently not supported
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:52] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:55] overlayfs: idmapped layers are currently not supported
	[ +29.761517] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:56] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [72e269f022f254f08ad2c4279dfa963714a590ab2179ca1bc546c0af2eee22e7] <==
	{"level":"warn","ts":"2025-10-06T19:56:55.165260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.190317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.204164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.228563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.246572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.259775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.283638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.294017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.319750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.351044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.379599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.419909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.439005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.468632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.497822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.536835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.568243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.587933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.605406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.628016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.661883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.735354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.762305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.789460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:56:55.892001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35398","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:57:08 up  1:39,  0 user,  load average: 3.68, 2.92, 2.17
	Linux newest-cni-988436 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8bd1cff48ea3ec363980ecc7de9cd38d53f657f03a625fbbce3b9c89a79b150e] <==
	I1006 19:57:06.757613       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:57:06.757971       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1006 19:57:06.758086       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:57:06.758098       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:57:06.758111       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:57:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:57:06.955225       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:57:07.044118       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:57:07.044221       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:57:07.044482       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [33bc78e649e3964a059ea4b986a5c310d98015d2be45b84cc217ed7b8daa3938] <==
	I1006 19:56:57.164048       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1006 19:56:57.164140       1 policy_source.go:240] refreshing policies
	I1006 19:56:57.169111       1 controller.go:667] quota admission added evaluator for: namespaces
	I1006 19:56:57.298140       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:56:57.318337       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1006 19:56:57.373361       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:56:57.373545       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1006 19:56:57.434458       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:56:57.654494       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1006 19:56:57.667974       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1006 19:56:57.668069       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:56:59.059328       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:56:59.130412       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:56:59.228630       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1006 19:56:59.237607       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1006 19:56:59.238980       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 19:56:59.246023       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 19:57:00.176582       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 19:57:00.845327       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 19:57:00.882599       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1006 19:57:00.932149       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1006 19:57:05.897726       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 19:57:06.060491       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:57:06.072658       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:57:06.138978       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [892bc7ebb03f426c4f769ab62dbf3c8e5fa8fb69df041445eead2e69f4627da3] <==
	I1006 19:57:05.178770       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1006 19:57:05.179963       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1006 19:57:05.180036       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1006 19:57:05.180061       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1006 19:57:05.180038       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1006 19:57:05.180129       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:57:05.180205       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1006 19:57:05.180299       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1006 19:57:05.180743       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1006 19:57:05.184477       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1006 19:57:05.184898       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1006 19:57:05.184946       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1006 19:57:05.197188       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1006 19:57:05.202551       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1006 19:57:05.208848       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1006 19:57:05.211076       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1006 19:57:05.216668       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1006 19:57:05.230277       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1006 19:57:05.233546       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1006 19:57:05.245972       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 19:57:05.247016       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 19:57:05.251368       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:57:05.251390       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:57:05.251399       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1006 19:57:06.615415       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [819660a37b0ac14be450c2e3180675da37ecd23d27fc6ca3652348692dfdb62d] <==
	I1006 19:57:06.721302       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:57:06.823794       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:57:06.928616       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:57:06.928744       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1006 19:57:06.929177       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:57:06.952338       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:57:06.952460       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:57:06.957953       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:57:06.958321       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:57:06.958515       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:57:06.960051       1 config.go:200] "Starting service config controller"
	I1006 19:57:06.960107       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:57:06.960148       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:57:06.960175       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:57:06.960211       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:57:06.960236       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:57:06.960918       1 config.go:309] "Starting node config controller"
	I1006 19:57:06.963250       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:57:06.963313       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:57:07.061110       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 19:57:07.061208       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 19:57:07.061272       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [66181f83c4d66b60611b30133b7f9f1329fb67d99ae59e425fad8eb50118bdb1] <==
	I1006 19:56:58.478425       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:56:58.490644       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 19:56:58.490757       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:56:58.490779       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:56:58.490805       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1006 19:56:58.523997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1006 19:56:58.524872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1006 19:56:58.524918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1006 19:56:58.524980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1006 19:56:58.525070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1006 19:56:58.525145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1006 19:56:58.533276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1006 19:56:58.533662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1006 19:56:58.533757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1006 19:56:58.533811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1006 19:56:58.533866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1006 19:56:58.533907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1006 19:56:58.546071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1006 19:56:58.546161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1006 19:56:58.546382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1006 19:56:58.546800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1006 19:56:58.546861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1006 19:56:58.546918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1006 19:56:58.547003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1006 19:57:00.193633       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 19:57:01 newest-cni-988436 kubelet[1311]: I1006 19:57:01.330484    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9051b03d4dc46ca032984beae1a994a-k8s-certs\") pod \"kube-apiserver-newest-cni-988436\" (UID: \"f9051b03d4dc46ca032984beae1a994a\") " pod="kube-system/kube-apiserver-newest-cni-988436"
	Oct 06 19:57:01 newest-cni-988436 kubelet[1311]: I1006 19:57:01.330502    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9051b03d4dc46ca032984beae1a994a-usr-share-ca-certificates\") pod \"kube-apiserver-newest-cni-988436\" (UID: \"f9051b03d4dc46ca032984beae1a994a\") " pod="kube-system/kube-apiserver-newest-cni-988436"
	Oct 06 19:57:01 newest-cni-988436 kubelet[1311]: I1006 19:57:01.874647    1311 apiserver.go:52] "Watching apiserver"
	Oct 06 19:57:01 newest-cni-988436 kubelet[1311]: I1006 19:57:01.928418    1311 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 06 19:57:02 newest-cni-988436 kubelet[1311]: I1006 19:57:02.043750    1311 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-988436"
	Oct 06 19:57:02 newest-cni-988436 kubelet[1311]: I1006 19:57:02.044340    1311 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-988436"
	Oct 06 19:57:02 newest-cni-988436 kubelet[1311]: E1006 19:57:02.068813    1311 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-988436\" already exists" pod="kube-system/kube-scheduler-newest-cni-988436"
	Oct 06 19:57:02 newest-cni-988436 kubelet[1311]: E1006 19:57:02.069202    1311 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-988436\" already exists" pod="kube-system/etcd-newest-cni-988436"
	Oct 06 19:57:02 newest-cni-988436 kubelet[1311]: I1006 19:57:02.085121    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-988436" podStartSLOduration=3.085102239 podStartE2EDuration="3.085102239s" podCreationTimestamp="2025-10-06 19:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:57:02.068676378 +0000 UTC m=+1.307762110" watchObservedRunningTime="2025-10-06 19:57:02.085102239 +0000 UTC m=+1.324187971"
	Oct 06 19:57:02 newest-cni-988436 kubelet[1311]: I1006 19:57:02.085347    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-988436" podStartSLOduration=1.085340709 podStartE2EDuration="1.085340709s" podCreationTimestamp="2025-10-06 19:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:57:02.084880147 +0000 UTC m=+1.323965895" watchObservedRunningTime="2025-10-06 19:57:02.085340709 +0000 UTC m=+1.324426433"
	Oct 06 19:57:02 newest-cni-988436 kubelet[1311]: I1006 19:57:02.125517    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-988436" podStartSLOduration=2.12550003 podStartE2EDuration="2.12550003s" podCreationTimestamp="2025-10-06 19:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:57:02.106273074 +0000 UTC m=+1.345358823" watchObservedRunningTime="2025-10-06 19:57:02.12550003 +0000 UTC m=+1.364585770"
	Oct 06 19:57:02 newest-cni-988436 kubelet[1311]: I1006 19:57:02.143046    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-988436" podStartSLOduration=1.14302672 podStartE2EDuration="1.14302672s" podCreationTimestamp="2025-10-06 19:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:57:02.126095544 +0000 UTC m=+1.365181268" watchObservedRunningTime="2025-10-06 19:57:02.14302672 +0000 UTC m=+1.382112460"
	Oct 06 19:57:05 newest-cni-988436 kubelet[1311]: I1006 19:57:05.212424    1311 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 06 19:57:05 newest-cni-988436 kubelet[1311]: I1006 19:57:05.213352    1311 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 06 19:57:06 newest-cni-988436 kubelet[1311]: I1006 19:57:06.257137    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2289712-8aa7-4ef1-909f-02322c74d8ee-xtables-lock\") pod \"kube-proxy-wsgmd\" (UID: \"b2289712-8aa7-4ef1-909f-02322c74d8ee\") " pod="kube-system/kube-proxy-wsgmd"
	Oct 06 19:57:06 newest-cni-988436 kubelet[1311]: I1006 19:57:06.257196    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b2c3ef8-c3bb-4e24-a72a-5a696590f257-xtables-lock\") pod \"kindnet-v4krt\" (UID: \"8b2c3ef8-c3bb-4e24-a72a-5a696590f257\") " pod="kube-system/kindnet-v4krt"
	Oct 06 19:57:06 newest-cni-988436 kubelet[1311]: I1006 19:57:06.257220    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4hpm\" (UniqueName: \"kubernetes.io/projected/8b2c3ef8-c3bb-4e24-a72a-5a696590f257-kube-api-access-l4hpm\") pod \"kindnet-v4krt\" (UID: \"8b2c3ef8-c3bb-4e24-a72a-5a696590f257\") " pod="kube-system/kindnet-v4krt"
	Oct 06 19:57:06 newest-cni-988436 kubelet[1311]: I1006 19:57:06.257243    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2289712-8aa7-4ef1-909f-02322c74d8ee-kube-proxy\") pod \"kube-proxy-wsgmd\" (UID: \"b2289712-8aa7-4ef1-909f-02322c74d8ee\") " pod="kube-system/kube-proxy-wsgmd"
	Oct 06 19:57:06 newest-cni-988436 kubelet[1311]: I1006 19:57:06.257269    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2289712-8aa7-4ef1-909f-02322c74d8ee-lib-modules\") pod \"kube-proxy-wsgmd\" (UID: \"b2289712-8aa7-4ef1-909f-02322c74d8ee\") " pod="kube-system/kube-proxy-wsgmd"
	Oct 06 19:57:06 newest-cni-988436 kubelet[1311]: I1006 19:57:06.257291    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc284\" (UniqueName: \"kubernetes.io/projected/b2289712-8aa7-4ef1-909f-02322c74d8ee-kube-api-access-hc284\") pod \"kube-proxy-wsgmd\" (UID: \"b2289712-8aa7-4ef1-909f-02322c74d8ee\") " pod="kube-system/kube-proxy-wsgmd"
	Oct 06 19:57:06 newest-cni-988436 kubelet[1311]: I1006 19:57:06.257308    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b2c3ef8-c3bb-4e24-a72a-5a696590f257-lib-modules\") pod \"kindnet-v4krt\" (UID: \"8b2c3ef8-c3bb-4e24-a72a-5a696590f257\") " pod="kube-system/kindnet-v4krt"
	Oct 06 19:57:06 newest-cni-988436 kubelet[1311]: I1006 19:57:06.257332    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8b2c3ef8-c3bb-4e24-a72a-5a696590f257-cni-cfg\") pod \"kindnet-v4krt\" (UID: \"8b2c3ef8-c3bb-4e24-a72a-5a696590f257\") " pod="kube-system/kindnet-v4krt"
	Oct 06 19:57:06 newest-cni-988436 kubelet[1311]: I1006 19:57:06.425824    1311 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 06 19:57:06 newest-cni-988436 kubelet[1311]: W1006 19:57:06.586633    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd/crio-349a5fe2f1e238ebf27d6f4606699050c2428a557a1a37d616aa955573313af3 WatchSource:0}: Error finding container 349a5fe2f1e238ebf27d6f4606699050c2428a557a1a37d616aa955573313af3: Status 404 returned error can't find the container with id 349a5fe2f1e238ebf27d6f4606699050c2428a557a1a37d616aa955573313af3
	Oct 06 19:57:07 newest-cni-988436 kubelet[1311]: I1006 19:57:07.091528    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-v4krt" podStartSLOduration=1.091511306 podStartE2EDuration="1.091511306s" podCreationTimestamp="2025-10-06 19:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 19:57:07.091370947 +0000 UTC m=+6.330456687" watchObservedRunningTime="2025-10-06 19:57:07.091511306 +0000 UTC m=+6.330597030"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-988436 -n newest-cni-988436
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-988436 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-z6drc storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-988436 describe pod coredns-66bc5c9577-z6drc storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-988436 describe pod coredns-66bc5c9577-z6drc storage-provisioner: exit status 1 (79.001309ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-z6drc" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-988436 describe pod coredns-66bc5c9577-z6drc storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.35s)
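For reference, the post-mortem above is a two-step check: list every pod whose phase is not Running, then describe each name; the NotFound errors follow because the describe runs without a namespace flag, so it searches the context's default namespace rather than kube-system. Below is a minimal Go sketch of that same sequence, assuming kubectl is on PATH and reusing the context name from this report; it is an illustrative helper, not the test's own code.

	// Illustrative sketch only (not helpers_test.go): repeat the post-mortem
	// steps shown above against the same kube context.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "newest-cni-988436" // context name taken from this report

		// Step 1: names of pods that are not Running, across all namespaces.
		out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			fmt.Println("list failed:", err)
			return
		}

		// Step 2: describe each name; without -n this looks in the default
		// namespace, which is why kube-system pods come back NotFound.
		for _, pod := range strings.Fields(string(out)) {
			desc, derr := exec.Command("kubectl", "--context", ctx, "describe", "pod", pod).CombinedOutput()
			fmt.Printf("%s: err=%v\n%s\n", pod, derr, desc)
		}
	}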

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (7.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-988436 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-988436 --alsologtostderr -v=1: exit status 80 (2.331536358s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-988436 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:57:33.300869  217337 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:57:33.302021  217337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:57:33.302052  217337 out.go:374] Setting ErrFile to fd 2...
	I1006 19:57:33.302074  217337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:57:33.302386  217337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:57:33.302681  217337 out.go:368] Setting JSON to false
	I1006 19:57:33.307878  217337 mustload.go:65] Loading cluster: newest-cni-988436
	I1006 19:57:33.308361  217337 config.go:182] Loaded profile config "newest-cni-988436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:33.308934  217337 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:57:33.337368  217337 host.go:66] Checking if "newest-cni-988436" exists ...
	I1006 19:57:33.337691  217337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:57:33.463574  217337 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-06 19:57:33.452239835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:57:33.464334  217337 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-988436 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1006 19:57:33.467680  217337 out.go:179] * Pausing node newest-cni-988436 ... 
	I1006 19:57:33.471294  217337 host.go:66] Checking if "newest-cni-988436" exists ...
	I1006 19:57:33.471624  217337 ssh_runner.go:195] Run: systemctl --version
	I1006 19:57:33.471668  217337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:33.497830  217337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:33.638650  217337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:57:33.668191  217337 pause.go:51] kubelet running: true
	I1006 19:57:33.668267  217337 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:57:34.058944  217337 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:57:34.059033  217337 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:57:34.222273  217337 cri.go:89] found id: "00e8681df5f941509c743b69461398538f9a51c14d6c19d45910d346d99d8d58"
	I1006 19:57:34.222302  217337 cri.go:89] found id: "5dfc72bf385ca72f9ef0e7c8285dbd5e03723e68c0e33f9d091cd08c682f638b"
	I1006 19:57:34.222309  217337 cri.go:89] found id: "f34d65961eeb0e039a90462d44820d5cb40290d6829def8b49b70d9904b4e966"
	I1006 19:57:34.222313  217337 cri.go:89] found id: "134549c8cebf10c064c8c3afaf8ea9bc7932630b60156ffdc5ba7e9afcd15c21"
	I1006 19:57:34.222317  217337 cri.go:89] found id: "edfddc0e3f8d1c28acd50b5fa86583ed3677369806e4f0393d57c1cb3eba08dd"
	I1006 19:57:34.222321  217337 cri.go:89] found id: "ef74cfcf93f82ed60cc7dc8b6e3da6bd2c5e5437877a94ad1f08dfe2f858aa0c"
	I1006 19:57:34.222324  217337 cri.go:89] found id: ""
	I1006 19:57:34.222466  217337 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:57:34.238042  217337 retry.go:31] will retry after 309.603018ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:57:34Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:57:34.548717  217337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:57:34.565227  217337 pause.go:51] kubelet running: false
	I1006 19:57:34.565290  217337 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:57:34.721397  217337 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:57:34.721486  217337 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:57:34.789291  217337 cri.go:89] found id: "00e8681df5f941509c743b69461398538f9a51c14d6c19d45910d346d99d8d58"
	I1006 19:57:34.789313  217337 cri.go:89] found id: "5dfc72bf385ca72f9ef0e7c8285dbd5e03723e68c0e33f9d091cd08c682f638b"
	I1006 19:57:34.789319  217337 cri.go:89] found id: "f34d65961eeb0e039a90462d44820d5cb40290d6829def8b49b70d9904b4e966"
	I1006 19:57:34.789323  217337 cri.go:89] found id: "134549c8cebf10c064c8c3afaf8ea9bc7932630b60156ffdc5ba7e9afcd15c21"
	I1006 19:57:34.789327  217337 cri.go:89] found id: "edfddc0e3f8d1c28acd50b5fa86583ed3677369806e4f0393d57c1cb3eba08dd"
	I1006 19:57:34.789356  217337 cri.go:89] found id: "ef74cfcf93f82ed60cc7dc8b6e3da6bd2c5e5437877a94ad1f08dfe2f858aa0c"
	I1006 19:57:34.789367  217337 cri.go:89] found id: ""
	I1006 19:57:34.789425  217337 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:57:34.800989  217337 retry.go:31] will retry after 413.619312ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:57:34Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:57:35.215613  217337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:57:35.229677  217337 pause.go:51] kubelet running: false
	I1006 19:57:35.229738  217337 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:57:35.433262  217337 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:57:35.433335  217337 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:57:35.524406  217337 cri.go:89] found id: "00e8681df5f941509c743b69461398538f9a51c14d6c19d45910d346d99d8d58"
	I1006 19:57:35.524427  217337 cri.go:89] found id: "5dfc72bf385ca72f9ef0e7c8285dbd5e03723e68c0e33f9d091cd08c682f638b"
	I1006 19:57:35.524432  217337 cri.go:89] found id: "f34d65961eeb0e039a90462d44820d5cb40290d6829def8b49b70d9904b4e966"
	I1006 19:57:35.524436  217337 cri.go:89] found id: "134549c8cebf10c064c8c3afaf8ea9bc7932630b60156ffdc5ba7e9afcd15c21"
	I1006 19:57:35.524439  217337 cri.go:89] found id: "edfddc0e3f8d1c28acd50b5fa86583ed3677369806e4f0393d57c1cb3eba08dd"
	I1006 19:57:35.524443  217337 cri.go:89] found id: "ef74cfcf93f82ed60cc7dc8b6e3da6bd2c5e5437877a94ad1f08dfe2f858aa0c"
	I1006 19:57:35.524446  217337 cri.go:89] found id: ""
	I1006 19:57:35.524506  217337 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:57:35.540800  217337 out.go:203] 
	W1006 19:57:35.543798  217337 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:57:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:57:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 19:57:35.543821  217337 out.go:285] * 
	* 
	W1006 19:57:35.549004  217337 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 19:57:35.551788  217337 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-988436 --alsologtostderr -v=1 failed: exit status 80
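The exit status 80 above bottoms out in the container listing rather than the pause itself: kubelet is stopped and crictl finds the kube-system containers, but every `sudo runc list -f json` attempt fails because runc's default state directory `/run/runc` does not exist on this crio node. The following is a minimal node-side sketch of that precondition check; `/run/crun` is included only as an assumed crun equivalent, since the log above confirms nothing beyond `/run/runc` being missing.

	// Diagnostic sketch (not minikube code): report which OCI runtime state
	// directories exist before anything shells out to `runc list`.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// /run/runc is the path in the error above; /run/crun is assumed to be
		// the crun equivalent on nodes where crio uses crun instead of runc.
		for _, root := range []string{"/run/runc", "/run/crun"} {
			if fi, err := os.Stat(root); err == nil && fi.IsDir() {
				fmt.Printf("present: %s\n", root)
			} else {
				fmt.Printf("missing: %s (%v)\n", root, err)
			}
		}
	}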
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-988436
helpers_test.go:243: (dbg) docker inspect newest-cni-988436:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd",
	        "Created": "2025-10-06T19:56:32.85241989Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:57:11.13183267Z",
	            "FinishedAt": "2025-10-06T19:57:10.181999701Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd/hosts",
	        "LogPath": "/var/lib/docker/containers/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd-json.log",
	        "Name": "/newest-cni-988436",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-988436:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-988436",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd",
	                "LowerDir": "/var/lib/docker/overlay2/5c259bf9ca1cd1f255780dfe0febf52c4b8906ab233552c3762d9ba362419316-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5c259bf9ca1cd1f255780dfe0febf52c4b8906ab233552c3762d9ba362419316/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5c259bf9ca1cd1f255780dfe0febf52c4b8906ab233552c3762d9ba362419316/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5c259bf9ca1cd1f255780dfe0febf52c4b8906ab233552c3762d9ba362419316/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-988436",
	                "Source": "/var/lib/docker/volumes/newest-cni-988436/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-988436",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-988436",
	                "name.minikube.sigs.k8s.io": "newest-cni-988436",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5cd6c68d9a2f028bf5bc25659aa7bdb9ba1d8e0ee9dd6fd113157ce6b681cc15",
	            "SandboxKey": "/var/run/docker/netns/5cd6c68d9a2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-988436": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:50:06:b6:1c:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a65d7e56c8a8636a38fe861ce7ce76450c77f0c819639a82d76a33b2e2e5cd5c",
	                    "EndpointID": "308468a5247a62739bdd39e93340f27c1ae37d0bfa958b4878dfc856800f0745",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-988436",
	                        "9b780de2752c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
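The port map in this inspect output is also what the pause run consumed: the 19:57:33 log shows the SSH endpoint being resolved from the `22/tcp` binding (host port 33090) via a Go template. Below is a small sketch of the same lookup through the docker CLI, using the container name from this report; it is an illustrative wrapper, not minikube's sshutil.

	// Sketch of the host-port lookup shown in the pause log above; assumes the
	// docker CLI is on PATH and the container exists.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		name := "newest-cni-988436"
		// Same Go template the pause log passes to `docker container inspect -f`.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("22/tcp is published on host port", strings.TrimSpace(string(out))) // 33090 in this run
	}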
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-988436 -n newest-cni-988436
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-988436 -n newest-cni-988436: exit status 2 (431.856044ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-988436 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-988436 logs -n 25: (1.343037267s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-830393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │                     │
	│ stop    │ -p embed-certs-830393 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-830393 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ pause   │ -p no-preload-314275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │                     │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:56 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p disable-driver-mounts-932453                                                                                                                                                                                                               │ disable-driver-mounts-932453 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ start   │ -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:56 UTC │
	│ image   │ embed-certs-830393 image list --format=json                                                                                                                                                                                                   │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ pause   │ -p embed-certs-830393 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	│ delete  │ -p embed-certs-830393                                                                                                                                                                                                                         │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ delete  │ -p embed-certs-830393                                                                                                                                                                                                                         │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ start   │ -p newest-cni-988436 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-997276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-997276 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p newest-cni-988436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │                     │
	│ stop    │ -p newest-cni-988436 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable dashboard -p newest-cni-988436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ start   │ -p newest-cni-988436 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-997276 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ start   │ -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │                     │
	│ image   │ newest-cni-988436 image list --format=json                                                                                                                                                                                                    │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ pause   │ -p newest-cni-988436 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:57:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:57:12.457316  214483 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:57:12.457487  214483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:57:12.457516  214483 out.go:374] Setting ErrFile to fd 2...
	I1006 19:57:12.457536  214483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:57:12.457790  214483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:57:12.458182  214483 out.go:368] Setting JSON to false
	I1006 19:57:12.459059  214483 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5968,"bootTime":1759774665,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:57:12.459161  214483 start.go:140] virtualization:  
	I1006 19:57:12.463762  214483 out.go:179] * [default-k8s-diff-port-997276] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:57:12.467025  214483 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:57:12.467110  214483 notify.go:220] Checking for updates...
	I1006 19:57:12.472895  214483 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:57:12.475808  214483 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:57:12.478821  214483 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:57:12.481722  214483 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:57:12.484688  214483 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:57:12.488072  214483 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:12.488681  214483 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:57:12.513377  214483 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:57:12.513497  214483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:57:12.573907  214483 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-06 19:57:12.564543979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:57:12.574017  214483 docker.go:318] overlay module found
	I1006 19:57:12.577323  214483 out.go:179] * Using the docker driver based on existing profile
	I1006 19:57:12.580126  214483 start.go:304] selected driver: docker
	I1006 19:57:12.580146  214483 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:57:12.580248  214483 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:57:12.580957  214483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:57:12.648083  214483 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-06 19:57:12.638983786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:57:12.648458  214483 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:57:12.648495  214483 cni.go:84] Creating CNI manager for ""
	I1006 19:57:12.648556  214483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:57:12.648625  214483 start.go:348] cluster config:
	{Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
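	The cluster config dumped above is what minikube persists into the profile's config.json (saved a few lines further down under .minikube/profiles/default-k8s-diff-port-997276/). A small sketch for pulling a few of those fields back out with jq, assuming the JSON layout mirrors the struct printed here and that jq is available on the host:

	    # hypothetical inspection of the saved profile config (path taken from this log)
	    CFG=/home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/config.json
	    jq '{name: .Name, apiServerPort: .APIServerPort, k8s: .KubernetesConfig.KubernetesVersion, runtime: .KubernetesConfig.ContainerRuntime}' "$CFG"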
	I1006 19:57:12.653680  214483 out.go:179] * Starting "default-k8s-diff-port-997276" primary control-plane node in "default-k8s-diff-port-997276" cluster
	I1006 19:57:12.656515  214483 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:57:12.659398  214483 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:57:12.663029  214483 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:57:12.663095  214483 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:57:12.663105  214483 cache.go:58] Caching tarball of preloaded images
	I1006 19:57:12.663158  214483 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:57:12.663196  214483 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:57:12.663206  214483 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:57:12.663326  214483 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/config.json ...
	I1006 19:57:12.682510  214483 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:57:12.682539  214483 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:57:12.682585  214483 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:57:12.682612  214483 start.go:360] acquireMachinesLock for default-k8s-diff-port-997276: {Name:mk7b25a356bfff93cc3ef03a69dea8b7e852b578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:57:12.682671  214483 start.go:364] duration metric: took 37.482µs to acquireMachinesLock for "default-k8s-diff-port-997276"
	I1006 19:57:12.682696  214483 start.go:96] Skipping create...Using existing machine configuration
	I1006 19:57:12.682712  214483 fix.go:54] fixHost starting: 
	I1006 19:57:12.682991  214483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:57:12.699687  214483 fix.go:112] recreateIfNeeded on default-k8s-diff-port-997276: state=Stopped err=<nil>
	W1006 19:57:12.699754  214483 fix.go:138] unexpected machine state, will restart: <nil>
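	fix.go decides whether the existing machine can be reused by querying the container's state; here it is Stopped, so minikube falls through to a plain restart (visible below as "docker start default-k8s-diff-port-997276"). Roughly the same check by hand, using only the commands that appear in this log:

	    STATE=$(docker container inspect default-k8s-diff-port-997276 --format '{{.State.Status}}')
	    if [ "$STATE" != "running" ]; then
	      docker start default-k8s-diff-port-997276
	    fi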
	I1006 19:57:11.100406  213865 out.go:252] * Restarting existing docker container for "newest-cni-988436" ...
	I1006 19:57:11.100509  213865 cli_runner.go:164] Run: docker start newest-cni-988436
	I1006 19:57:11.358870  213865 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:57:11.379256  213865 kic.go:430] container "newest-cni-988436" state is running.
	I1006 19:57:11.379760  213865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-988436
	I1006 19:57:11.408631  213865 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/config.json ...
	I1006 19:57:11.408927  213865 machine.go:93] provisionDockerMachine start ...
	I1006 19:57:11.409009  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:11.439872  213865 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:11.440609  213865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1006 19:57:11.440634  213865 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:57:11.441967  213865 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1006 19:57:14.579498  213865 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-988436
	
	I1006 19:57:14.579527  213865 ubuntu.go:182] provisioning hostname "newest-cni-988436"
	I1006 19:57:14.579590  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:14.598393  213865 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:14.598704  213865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1006 19:57:14.598716  213865 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-988436 && echo "newest-cni-988436" | sudo tee /etc/hostname
	I1006 19:57:14.741265  213865 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-988436
	
	I1006 19:57:14.741340  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:14.759862  213865 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:14.760163  213865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1006 19:57:14.760180  213865 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-988436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-988436/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-988436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:57:14.892059  213865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:57:14.892084  213865 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:57:14.892120  213865 ubuntu.go:190] setting up certificates
	I1006 19:57:14.892133  213865 provision.go:84] configureAuth start
	I1006 19:57:14.892198  213865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-988436
	I1006 19:57:14.910422  213865 provision.go:143] copyHostCerts
	I1006 19:57:14.910490  213865 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:57:14.910508  213865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:57:14.910586  213865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:57:14.910682  213865 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:57:14.910688  213865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:57:14.910713  213865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:57:14.910762  213865 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:57:14.910767  213865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:57:14.910788  213865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:57:14.910869  213865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.newest-cni-988436 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-988436]
	I1006 19:57:15.410126  213865 provision.go:177] copyRemoteCerts
	I1006 19:57:15.410209  213865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:57:15.410249  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:15.427357  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:15.523744  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 19:57:15.542092  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:57:15.559935  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 19:57:15.580163  213865 provision.go:87] duration metric: took 688.005611ms to configureAuth
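	configureAuth regenerates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-988436) and copies it to /etc/docker/server.pem inside the container. A quick way to confirm the SANs on the node, a sketch assuming nothing beyond the paths already shown in this log:

	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'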
	I1006 19:57:15.580195  213865 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:57:15.580433  213865 config.go:182] Loaded profile config "newest-cni-988436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:15.580555  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:15.598176  213865 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:15.598490  213865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1006 19:57:15.598508  213865 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:57:12.703239  214483 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-997276" ...
	I1006 19:57:12.703321  214483 cli_runner.go:164] Run: docker start default-k8s-diff-port-997276
	I1006 19:57:13.041404  214483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:57:13.063462  214483 kic.go:430] container "default-k8s-diff-port-997276" state is running.
	I1006 19:57:13.063919  214483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997276
	I1006 19:57:13.086800  214483 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/config.json ...
	I1006 19:57:13.087044  214483 machine.go:93] provisionDockerMachine start ...
	I1006 19:57:13.087123  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:13.107124  214483 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:13.107438  214483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1006 19:57:13.107455  214483 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:57:13.108941  214483 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1006 19:57:16.271499  214483 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-997276
	
	I1006 19:57:16.271522  214483 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-997276"
	I1006 19:57:16.271590  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:16.293026  214483 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:16.293362  214483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1006 19:57:16.293375  214483 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-997276 && echo "default-k8s-diff-port-997276" | sudo tee /etc/hostname
	I1006 19:57:16.468233  214483 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-997276
	
	I1006 19:57:16.468308  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:16.501085  214483 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:16.501384  214483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1006 19:57:16.501408  214483 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-997276' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-997276/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-997276' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:57:16.660487  214483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:57:16.660517  214483 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:57:16.660551  214483 ubuntu.go:190] setting up certificates
	I1006 19:57:16.660566  214483 provision.go:84] configureAuth start
	I1006 19:57:16.660631  214483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997276
	I1006 19:57:16.684354  214483 provision.go:143] copyHostCerts
	I1006 19:57:16.684433  214483 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:57:16.684453  214483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:57:16.684521  214483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:57:16.684619  214483 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:57:16.684628  214483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:57:16.684652  214483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:57:16.684707  214483 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:57:16.684715  214483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:57:16.684735  214483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:57:16.684787  214483 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-997276 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-997276 localhost minikube]
	I1006 19:57:17.091215  214483 provision.go:177] copyRemoteCerts
	I1006 19:57:17.091338  214483 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:57:17.091427  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:17.111346  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:17.213923  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:57:17.241686  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1006 19:57:17.265027  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 19:57:17.288261  214483 provision.go:87] duration metric: took 627.667072ms to configureAuth
	I1006 19:57:17.288283  214483 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:57:17.288487  214483 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:17.288594  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:17.329369  214483 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:17.329671  214483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1006 19:57:17.329686  214483 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:57:15.871995  213865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:57:15.872018  213865 machine.go:96] duration metric: took 4.463081627s to provisionDockerMachine
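	The last provisioning step writes /etc/sysconfig/crio.minikube with an --insecure-registry flag for the service CIDR and restarts CRI-O; the echoed file content in the SSH output above confirms the write. A hedged double-check on the node (the file path and service name both come from this log):

	    sudo cat /etc/sysconfig/crio.minikube
	    sudo systemctl is-active crio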
	I1006 19:57:15.872030  213865 start.go:293] postStartSetup for "newest-cni-988436" (driver="docker")
	I1006 19:57:15.872040  213865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:57:15.872107  213865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:57:15.872147  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:15.892883  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:15.988128  213865 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:57:15.991880  213865 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:57:15.991912  213865 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:57:15.991941  213865 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:57:15.992005  213865 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:57:15.992136  213865 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:57:15.992245  213865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:57:16.000130  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:57:16.018910  213865 start.go:296] duration metric: took 146.863291ms for postStartSetup
	I1006 19:57:16.019003  213865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:57:16.019097  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:16.038016  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:16.137178  213865 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:57:16.147755  213865 fix.go:56] duration metric: took 5.067925969s for fixHost
	I1006 19:57:16.147830  213865 start.go:83] releasing machines lock for "newest-cni-988436", held for 5.068035739s
	I1006 19:57:16.147934  213865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-988436
	I1006 19:57:16.169786  213865 ssh_runner.go:195] Run: cat /version.json
	I1006 19:57:16.169836  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:16.170069  213865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:57:16.170127  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:16.191058  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:16.212989  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:16.287839  213865 ssh_runner.go:195] Run: systemctl --version
	I1006 19:57:16.386800  213865 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:57:16.427025  213865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:57:16.431371  213865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:57:16.431446  213865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:57:16.440403  213865 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 19:57:16.440427  213865 start.go:495] detecting cgroup driver to use...
	I1006 19:57:16.440481  213865 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:57:16.440556  213865 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:57:16.457511  213865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:57:16.477175  213865 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:57:16.477239  213865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:57:16.496724  213865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:57:16.520234  213865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:57:16.663112  213865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:57:16.809948  213865 docker.go:234] disabling docker service ...
	I1006 19:57:16.810014  213865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:57:16.827479  213865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:57:16.841452  213865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:57:16.975220  213865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:57:17.140694  213865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
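	Because this cluster runs on CRI-O, minikube stops, disables and masks both the cri-docker and docker units before touching the runtime config. A short sketch for confirming the end state on the node (unit names taken from the commands above; exact output wording may vary by systemd version):

	    systemctl is-enabled docker.service cri-docker.service 2>/dev/null   # expected: masked
	    systemctl is-active docker.service                                   # expected: inactive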
	I1006 19:57:17.154904  213865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:57:17.172592  213865 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:57:17.172665  213865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:17.181857  213865 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:57:17.181925  213865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:17.190804  213865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:17.199453  213865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:17.209166  213865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:57:17.217629  213865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:17.228382  213865 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:17.237991  213865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:17.248893  213865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:57:17.256749  213865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:57:17.264603  213865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:17.413673  213865 ssh_runner.go:195] Run: sudo systemctl restart crio
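	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10.1 as the pause image, cgroupfs as the cgroup manager, conmon_cgroup = "pod" and the net.ipv4.ip_unprivileged_port_start=0 sysctl, while /etc/crictl.yaml points crictl at the CRI-O socket. A sketch for verifying the result after the restart, using only paths that appear in this log:

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    cat /etc/crictl.yaml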
	I1006 19:57:17.568098  213865 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:57:17.568169  213865 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:57:17.574094  213865 start.go:563] Will wait 60s for crictl version
	I1006 19:57:17.574156  213865 ssh_runner.go:195] Run: which crictl
	I1006 19:57:17.578718  213865 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:57:17.625344  213865 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:57:17.625433  213865 ssh_runner.go:195] Run: crio --version
	I1006 19:57:17.655488  213865 ssh_runner.go:195] Run: crio --version
	I1006 19:57:17.695949  213865 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:57:17.699214  213865 cli_runner.go:164] Run: docker network inspect newest-cni-988436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:57:17.725324  213865 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1006 19:57:17.731896  213865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:57:17.746102  213865 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1006 19:57:17.749054  213865 kubeadm.go:883] updating cluster {Name:newest-cni-988436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:57:17.749200  213865 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:57:17.749279  213865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:57:17.788012  213865 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:57:17.788038  213865 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:57:17.788095  213865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:57:17.814676  213865 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:57:17.814704  213865 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:57:17.814712  213865 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1006 19:57:17.814873  213865 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-988436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
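	The kubelet unit rendered above (an ExecStart override pointing at /var/lib/minikube/binaries/v1.34.1/kubelet with the --node-ip and --hostname-override flags) is copied a little later to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service. A simple way to see the effective unit once it is written, assuming only standard systemd and procps tooling on the node:

	    systemctl cat kubelet
	    ps -C kubelet -o args=    # should show the node-ip and hostname-override flags from the drop-in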
	I1006 19:57:17.814982  213865 ssh_runner.go:195] Run: crio config
	I1006 19:57:17.904305  213865 cni.go:84] Creating CNI manager for ""
	I1006 19:57:17.904345  213865 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:57:17.904396  213865 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1006 19:57:17.904434  213865 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-988436 NodeName:newest-cni-988436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:57:17.904582  213865 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-988436"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:57:17.904670  213865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:57:17.927124  213865 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:57:17.927198  213865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:57:17.935482  213865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 19:57:17.950796  213865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:57:17.966065  213865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1006 19:57:17.980969  213865 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:57:17.984926  213865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:57:17.995076  213865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:18.183336  213865 ssh_runner.go:195] Run: sudo systemctl start kubelet
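	The generated kubeadm config is written to /var/tmp/minikube/kubeadm.yaml.new rather than directly over the live file; later in this log minikube diffs it against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring. The same check by hand:

	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new && echo "no reconfiguration needed"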
	I1006 19:57:18.201198  213865 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436 for IP: 192.168.85.2
	I1006 19:57:18.201222  213865 certs.go:195] generating shared ca certs ...
	I1006 19:57:18.201238  213865 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:18.201375  213865 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:57:18.201431  213865 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:57:18.201442  213865 certs.go:257] generating profile certs ...
	I1006 19:57:18.201526  213865 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/client.key
	I1006 19:57:18.201595  213865 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key.1ee6693d
	I1006 19:57:18.201637  213865 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.key
	I1006 19:57:18.201744  213865 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:57:18.201777  213865 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:57:18.201793  213865 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:57:18.201819  213865 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:57:18.201850  213865 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:57:18.201874  213865 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:57:18.201920  213865 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:57:18.202518  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:57:18.248587  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:57:18.284398  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:57:18.307310  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:57:18.334017  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 19:57:18.364395  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 19:57:18.411321  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:57:18.444657  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 19:57:18.501894  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:57:18.544574  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:57:18.572382  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:57:18.595247  213865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:57:18.611789  213865 ssh_runner.go:195] Run: openssl version
	I1006 19:57:18.620090  213865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:57:18.629422  213865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:57:18.633621  213865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:57:18.633748  213865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:57:18.678305  213865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:57:18.686586  213865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:57:18.700089  213865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:57:18.704666  213865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:57:18.704734  213865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:57:18.753665  213865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 19:57:18.761701  213865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:57:18.773022  213865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:57:18.777434  213865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:57:18.777499  213865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:57:18.831033  213865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
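	Each CA copied into /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject-hash name, which is how minikubeCA.pem ends up behind b5213941.0 above. The hash-to-link relationship can be reproduced directly on the node:

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
	    ls -l /etc/ssl/certs/${HASH}.0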
	I1006 19:57:18.839018  213865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:57:18.843339  213865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 19:57:18.888578  213865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 19:57:18.936741  213865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 19:57:18.990548  213865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 19:57:19.066275  213865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 19:57:19.140783  213865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
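	The -checkend 86400 runs above ask OpenSSL whether each control-plane certificate will still be valid 24 hours from now (exit status 0 if it will, non-zero if it expires within the window), which is how minikube decides whether certificates need regenerating on restart. The same probe against any one of the files checked here:

	    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"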
	I1006 19:57:19.226408  213865 kubeadm.go:400] StartCluster: {Name:newest-cni-988436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:57:19.226501  213865 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:57:19.226576  213865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:57:19.320520  213865 cri.go:89] found id: "ef74cfcf93f82ed60cc7dc8b6e3da6bd2c5e5437877a94ad1f08dfe2f858aa0c"
	I1006 19:57:19.320544  213865 cri.go:89] found id: ""
	I1006 19:57:19.320596  213865 ssh_runner.go:195] Run: sudo runc list -f json
	W1006 19:57:19.337590  213865 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:57:19Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:57:19.337689  213865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:57:19.357304  213865 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 19:57:19.357334  213865 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 19:57:19.357387  213865 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 19:57:19.400885  213865 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 19:57:19.401293  213865 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-988436" does not appear in /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:57:19.401396  213865 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-2540/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-988436" cluster setting kubeconfig missing "newest-cni-988436" context setting]
	I1006 19:57:19.401697  213865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:19.402904  213865 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 19:57:19.419742  213865 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1006 19:57:19.419829  213865 kubeadm.go:601] duration metric: took 62.487145ms to restartPrimaryControlPlane
	I1006 19:57:19.419853  213865 kubeadm.go:402] duration metric: took 193.454555ms to StartCluster
	I1006 19:57:19.419900  213865 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:19.420097  213865 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:57:19.420792  213865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
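	Because the "newest-cni-988436" cluster and context were missing, minikube repairs the kubeconfig before waiting on the node. A quick check that the repair landed, using the kubeconfig path from this log:

	    kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21701-2540/kubeconfig
	    kubectl config view --kubeconfig /home/jenkins/minikube-integration/21701-2540/kubeconfig -o jsonpath='{.clusters[*].name}'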
	I1006 19:57:19.421118  213865 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:57:19.421356  213865 config.go:182] Loaded profile config "newest-cni-988436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:19.421370  213865 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:57:19.421781  213865 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-988436"
	I1006 19:57:19.421796  213865 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-988436"
	W1006 19:57:19.421803  213865 addons.go:247] addon storage-provisioner should already be in state true
	I1006 19:57:19.421826  213865 host.go:66] Checking if "newest-cni-988436" exists ...
	I1006 19:57:19.421836  213865 addons.go:69] Setting dashboard=true in profile "newest-cni-988436"
	I1006 19:57:19.421853  213865 addons.go:238] Setting addon dashboard=true in "newest-cni-988436"
	W1006 19:57:19.421871  213865 addons.go:247] addon dashboard should already be in state true
	I1006 19:57:19.421895  213865 host.go:66] Checking if "newest-cni-988436" exists ...
	I1006 19:57:19.422277  213865 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:57:19.422394  213865 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:57:19.422765  213865 addons.go:69] Setting default-storageclass=true in profile "newest-cni-988436"
	I1006 19:57:19.422789  213865 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-988436"
	I1006 19:57:19.423069  213865 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:57:19.425885  213865 out.go:179] * Verifying Kubernetes components...
	I1006 19:57:19.431961  213865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:19.481842  213865 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 19:57:19.481897  213865 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1006 19:57:19.484808  213865 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:57:19.484833  213865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 19:57:19.484902  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:19.488919  213865 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1006 19:57:17.694447  214483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:57:17.694478  214483 machine.go:96] duration metric: took 4.607416378s to provisionDockerMachine
	I1006 19:57:17.694506  214483 start.go:293] postStartSetup for "default-k8s-diff-port-997276" (driver="docker")
	I1006 19:57:17.694523  214483 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:57:17.694678  214483 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:57:17.694750  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:17.728502  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:17.838424  214483 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:57:17.842688  214483 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:57:17.842715  214483 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:57:17.842725  214483 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:57:17.842783  214483 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:57:17.842880  214483 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:57:17.842986  214483 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:57:17.851382  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:57:17.871148  214483 start.go:296] duration metric: took 176.620175ms for postStartSetup
	I1006 19:57:17.871274  214483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:57:17.871359  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:17.903012  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:18.011889  214483 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:57:18.027883  214483 fix.go:56] duration metric: took 5.345167136s for fixHost
	I1006 19:57:18.027909  214483 start.go:83] releasing machines lock for "default-k8s-diff-port-997276", held for 5.345223318s
	I1006 19:57:18.027986  214483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997276
	I1006 19:57:18.068131  214483 ssh_runner.go:195] Run: cat /version.json
	I1006 19:57:18.068191  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:18.068211  214483 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:57:18.068272  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:18.099828  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:18.116183  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:18.212328  214483 ssh_runner.go:195] Run: systemctl --version
	I1006 19:57:18.322713  214483 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:57:18.385651  214483 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:57:18.391829  214483 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:57:18.391987  214483 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:57:18.404019  214483 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 19:57:18.404043  214483 start.go:495] detecting cgroup driver to use...
	I1006 19:57:18.404076  214483 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:57:18.404123  214483 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:57:18.424988  214483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:57:18.446991  214483 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:57:18.447068  214483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:57:18.472701  214483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:57:18.489522  214483 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:57:18.692900  214483 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:57:18.858664  214483 docker.go:234] disabling docker service ...
	I1006 19:57:18.858727  214483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:57:18.875984  214483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:57:18.890542  214483 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:57:19.051290  214483 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:57:19.260867  214483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:57:19.280181  214483 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:57:19.302063  214483 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:57:19.302197  214483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:19.326820  214483 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:57:19.326900  214483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:19.340732  214483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:19.356307  214483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:19.369072  214483 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:57:19.378077  214483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:19.394503  214483 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:19.412997  214483 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:19.447857  214483 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:57:19.492047  214483 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:57:19.520732  214483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:19.773084  214483 ssh_runner.go:195] Run: sudo systemctl restart crio
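The tee/sed sequence above boils down to two small configuration files on the node before CRI-O is restarted. A reconstruction from the commands logged here (the real files ship extra defaults with the kicbase image, so this is only the slice these commands touch):

	# /etc/crictl.yaml - points crictl at the CRI-O socket
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf - keys rewritten by the sed calls above
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]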
	I1006 19:57:19.983369  214483 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:57:19.983431  214483 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:57:19.989013  214483 start.go:563] Will wait 60s for crictl version
	I1006 19:57:19.989106  214483 ssh_runner.go:195] Run: which crictl
	I1006 19:57:19.995066  214483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:57:20.049210  214483 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:57:20.049295  214483 ssh_runner.go:195] Run: crio --version
	I1006 19:57:20.109913  214483 ssh_runner.go:195] Run: crio --version
	I1006 19:57:20.162999  214483 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:57:19.496197  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1006 19:57:19.496218  213865 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1006 19:57:19.496291  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:19.497183  213865 addons.go:238] Setting addon default-storageclass=true in "newest-cni-988436"
	W1006 19:57:19.497205  213865 addons.go:247] addon default-storageclass should already be in state true
	I1006 19:57:19.497232  213865 host.go:66] Checking if "newest-cni-988436" exists ...
	I1006 19:57:19.497654  213865 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:57:19.547945  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:19.553117  213865 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 19:57:19.553137  213865 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 19:57:19.553199  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:19.562006  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:19.589630  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:19.908554  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1006 19:57:19.908580  213865 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1006 19:57:19.943179  213865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:57:19.978739  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1006 19:57:19.978826  213865 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1006 19:57:19.980118  213865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:57:20.014626  213865 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:57:20.014729  213865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:57:20.076314  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1006 19:57:20.076342  213865 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1006 19:57:20.079777  213865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 19:57:20.196271  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1006 19:57:20.196297  213865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1006 19:57:20.290794  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1006 19:57:20.290817  213865 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1006 19:57:20.363499  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1006 19:57:20.363525  213865 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1006 19:57:20.405880  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1006 19:57:20.405907  213865 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1006 19:57:20.425331  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1006 19:57:20.425357  213865 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1006 19:57:20.461297  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 19:57:20.461323  213865 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1006 19:57:20.499420  213865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 19:57:20.166049  214483 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:57:20.198340  214483 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1006 19:57:20.203043  214483 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:57:20.220911  214483 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:57:20.221031  214483 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:57:20.221087  214483 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:57:20.300882  214483 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:57:20.300908  214483 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:57:20.300965  214483 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:57:20.351095  214483 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:57:20.351119  214483 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:57:20.351128  214483 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1006 19:57:20.351239  214483 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-997276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:57:20.351328  214483 ssh_runner.go:195] Run: crio config
	I1006 19:57:20.463477  214483 cni.go:84] Creating CNI manager for ""
	I1006 19:57:20.463497  214483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:57:20.463523  214483 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:57:20.463551  214483 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-997276 NodeName:default-k8s-diff-port-997276 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:57:20.463752  214483 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-997276"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:57:20.463839  214483 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:57:20.473040  214483 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:57:20.473158  214483 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:57:20.482196  214483 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1006 19:57:20.500594  214483 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:57:20.519950  214483 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
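The rendered kubeadm config shown earlier is written to /var/tmp/minikube/kubeadm.yaml.new and later compared against the existing file with diff -u. As a side note, such a file can also be sanity-checked by hand with kubeadm's own validator (a hypothetical manual step, not something this run performs):

	# run on the node; uses the same pinned kubeadm binary the test installs
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new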
	I1006 19:57:20.541495  214483 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:57:20.552400  214483 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
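Together with the earlier host.minikube.internal rewrite, the two bash one-liners above leave the node's /etc/hosts with the minikube-internal names, roughly:

	# entries appended by the grep/echo/cp one-liners logged above
	192.168.76.1	host.minikube.internal
	192.168.76.2	control-plane.minikube.internal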
	I1006 19:57:20.567976  214483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:20.775832  214483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:57:20.816179  214483 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276 for IP: 192.168.76.2
	I1006 19:57:20.816256  214483 certs.go:195] generating shared ca certs ...
	I1006 19:57:20.816289  214483 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:20.816489  214483 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:57:20.816566  214483 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:57:20.816601  214483 certs.go:257] generating profile certs ...
	I1006 19:57:20.816733  214483 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.key
	I1006 19:57:20.816820  214483 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key.24aba503
	I1006 19:57:20.816890  214483 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.key
	I1006 19:57:20.817035  214483 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:57:20.817091  214483 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:57:20.817117  214483 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:57:20.817175  214483 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:57:20.817225  214483 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:57:20.817263  214483 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:57:20.817333  214483 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:57:20.817933  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:57:20.838075  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:57:20.872584  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:57:20.905243  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:57:20.936636  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1006 19:57:20.966840  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 19:57:21.007071  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:57:21.065101  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 19:57:21.117223  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:57:21.176481  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:57:21.223765  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:57:21.280982  214483 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:57:21.312101  214483 ssh_runner.go:195] Run: openssl version
	I1006 19:57:21.319545  214483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:57:21.331611  214483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:57:21.344080  214483 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:57:21.344200  214483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:57:21.393852  214483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:57:21.402132  214483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:57:21.414268  214483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:57:21.419537  214483 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:57:21.419621  214483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:57:21.462093  214483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:57:21.476198  214483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:57:21.485631  214483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:57:21.492344  214483 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:57:21.492472  214483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:57:21.540609  214483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
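The ls/openssl/ln sequence above is the standard OpenSSL hashed-symlink layout: each CA file gets a symlink named after its subject hash so verification code can locate it. Equivalent shell, using the same certificate as in this run (where the hash comes out to b5213941):

	# compute the subject hash and create the <hash>.0 link in /etc/ssl/certs
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"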
	I1006 19:57:21.548639  214483 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:57:21.553095  214483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 19:57:21.597377  214483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 19:57:21.640850  214483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 19:57:21.752718  214483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 19:57:21.866368  214483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 19:57:22.021521  214483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
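The -checkend 86400 probes above ask whether each control-plane certificate expires within the next 86400 seconds (24 hours); openssl exits 0 only if the certificate stays valid past that window, which is how minikube decides the existing certs can be reused. For example:

	# exit status 0 means the cert remains valid for at least another 24h
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "cert ok for 24h"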
	I1006 19:57:22.180891  214483 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:57:22.181007  214483 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:57:22.181092  214483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:57:22.306699  214483 cri.go:89] found id: "69016b9f99d774f122c6a833714dc8aa9a9bed369630bb89cab483dc73bb2308"
	I1006 19:57:22.306733  214483 cri.go:89] found id: "d55af54186cd2219cd4bae48a2441c417921efdadcb8fc6384bac72028d350db"
	I1006 19:57:22.306738  214483 cri.go:89] found id: "e8361e8e1ee0632342d163d9478d496057250b3eeebfb0c28f6c1c0c8879ec24"
	I1006 19:57:22.306750  214483 cri.go:89] found id: "3ccfad32d3fc760df3c3a85997578af79aef13072ed1d94e610c623e692996c5"
	I1006 19:57:22.306754  214483 cri.go:89] found id: ""
	I1006 19:57:22.306814  214483 ssh_runner.go:195] Run: sudo runc list -f json
	W1006 19:57:22.349254  214483 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:57:22Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:57:22.349355  214483 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:57:22.368086  214483 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 19:57:22.368156  214483 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 19:57:22.368238  214483 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 19:57:22.385006  214483 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 19:57:22.385611  214483 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-997276" does not appear in /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:57:22.385885  214483 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-2540/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-997276" cluster setting kubeconfig missing "default-k8s-diff-port-997276" context setting]
	I1006 19:57:22.386441  214483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:22.388197  214483 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 19:57:22.409168  214483 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1006 19:57:22.409203  214483 kubeadm.go:601] duration metric: took 41.029838ms to restartPrimaryControlPlane
	I1006 19:57:22.409212  214483 kubeadm.go:402] duration metric: took 228.331748ms to StartCluster
	I1006 19:57:22.409227  214483 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:22.409301  214483 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:57:22.410280  214483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:22.410523  214483 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:57:22.410925  214483 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:22.410886  214483 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:57:22.410974  214483 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-997276"
	I1006 19:57:22.410978  214483 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-997276"
	I1006 19:57:22.410989  214483 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-997276"
	I1006 19:57:22.410990  214483 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-997276"
	W1006 19:57:22.410997  214483 addons.go:247] addon storage-provisioner should already be in state true
	I1006 19:57:22.411016  214483 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-997276"
	I1006 19:57:22.411022  214483 host.go:66] Checking if "default-k8s-diff-port-997276" exists ...
	I1006 19:57:22.411028  214483 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-997276"
	I1006 19:57:22.411451  214483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:57:22.411460  214483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	W1006 19:57:22.410998  214483 addons.go:247] addon dashboard should already be in state true
	I1006 19:57:22.415692  214483 host.go:66] Checking if "default-k8s-diff-port-997276" exists ...
	I1006 19:57:22.416316  214483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:57:22.425752  214483 out.go:179] * Verifying Kubernetes components...
	I1006 19:57:22.435321  214483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:22.468709  214483 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 19:57:22.469735  214483 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-997276"
	W1006 19:57:22.469752  214483 addons.go:247] addon default-storageclass should already be in state true
	I1006 19:57:22.469777  214483 host.go:66] Checking if "default-k8s-diff-port-997276" exists ...
	I1006 19:57:22.470208  214483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:57:22.476480  214483 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:57:22.476511  214483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 19:57:22.476582  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:22.492407  214483 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1006 19:57:22.501964  214483 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1006 19:57:22.504967  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1006 19:57:22.504992  214483 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1006 19:57:22.505058  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:22.512375  214483 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 19:57:22.512410  214483 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 19:57:22.512475  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:22.535860  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:22.551596  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:22.561543  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:22.849689  214483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:57:22.892747  214483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:57:22.940973  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1006 19:57:22.940994  214483 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1006 19:57:22.994971  214483 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-997276" to be "Ready" ...
	I1006 19:57:23.014365  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1006 19:57:23.014389  214483 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1006 19:57:23.115077  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1006 19:57:23.115098  214483 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1006 19:57:23.115788  214483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 19:57:23.196246  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1006 19:57:23.196270  214483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1006 19:57:23.328901  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1006 19:57:23.328974  214483 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1006 19:57:23.411290  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1006 19:57:23.411314  214483 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1006 19:57:23.523427  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1006 19:57:23.523452  214483 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1006 19:57:23.568219  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1006 19:57:23.568243  214483 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1006 19:57:23.606884  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 19:57:23.606909  214483 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1006 19:57:23.643400  214483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 19:57:32.041277  213865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.06109049s)
	I1006 19:57:32.041348  213865 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (12.02659519s)
	I1006 19:57:32.041363  213865 api_server.go:72] duration metric: took 12.619912424s to wait for apiserver process to appear ...
	I1006 19:57:32.041374  213865 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:57:32.041392  213865 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:57:32.041711  213865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.961909137s)
	I1006 19:57:32.093122  213865 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1006 19:57:32.109309  213865 api_server.go:141] control plane version: v1.34.1
	I1006 19:57:32.109341  213865 api_server.go:131] duration metric: took 67.960062ms to wait for apiserver health ...
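The healthz wait above is a plain HTTPS GET against the apiserver endpoint; the same probe can be reproduced by hand (certificate verification skipped here for brevity):

	# expected body when the control plane is healthy: ok
	curl -sk https://192.168.85.2:8443/healthz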
	I1006 19:57:32.109351  213865 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:57:32.126413  213865 system_pods.go:59] 8 kube-system pods found
	I1006 19:57:32.126450  213865 system_pods.go:61] "coredns-66bc5c9577-z6drc" [4f782721-b2ed-4a40-9181-d83ac1315d08] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1006 19:57:32.126460  213865 system_pods.go:61] "etcd-newest-cni-988436" [b27477b1-584a-48dd-964f-383c3f41e66f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:57:32.126466  213865 system_pods.go:61] "kindnet-v4krt" [8b2c3ef8-c3bb-4e24-a72a-5a696590f257] Running
	I1006 19:57:32.126474  213865 system_pods.go:61] "kube-apiserver-newest-cni-988436" [21da4d62-1ceb-4988-a95f-d00aeed96f63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:57:32.126480  213865 system_pods.go:61] "kube-controller-manager-newest-cni-988436" [b07be9c3-cd1b-4026-8c91-76ab67ef61df] Running
	I1006 19:57:32.126490  213865 system_pods.go:61] "kube-proxy-wsgmd" [b2289712-8aa7-4ef1-909f-02322c74d8ee] Running
	I1006 19:57:32.126497  213865 system_pods.go:61] "kube-scheduler-newest-cni-988436" [e8d89cea-a1d3-4c7b-ac59-50ea8df07dcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:57:32.126505  213865 system_pods.go:61] "storage-provisioner" [6120daa3-8711-44b0-8951-f629755eb03c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1006 19:57:32.126512  213865 system_pods.go:74] duration metric: took 17.15569ms to wait for pod list to return data ...
	I1006 19:57:32.126525  213865 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:57:32.149057  213865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.64959263s)
	I1006 19:57:32.152464  213865 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-988436 addons enable metrics-server
	
	I1006 19:57:32.152968  213865 default_sa.go:45] found service account: "default"
	I1006 19:57:32.153018  213865 default_sa.go:55] duration metric: took 26.48621ms for default service account to be created ...
	I1006 19:57:32.153045  213865 kubeadm.go:586] duration metric: took 12.731592032s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1006 19:57:32.153091  213865 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:57:32.159427  213865 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1006 19:57:32.162497  213865 addons.go:514] duration metric: took 12.741122933s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1006 19:57:32.163075  213865 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:57:32.163098  213865 node_conditions.go:123] node cpu capacity is 2
	I1006 19:57:32.163110  213865 node_conditions.go:105] duration metric: took 9.998766ms to run NodePressure ...
	I1006 19:57:32.163123  213865 start.go:241] waiting for startup goroutines ...
	I1006 19:57:32.163130  213865 start.go:246] waiting for cluster config update ...
	I1006 19:57:32.163142  213865 start.go:255] writing updated cluster config ...
	I1006 19:57:32.163437  213865 ssh_runner.go:195] Run: rm -f paused
	I1006 19:57:32.255597  213865 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 19:57:32.259239  213865 out.go:179] * Done! kubectl is now configured to use "newest-cni-988436" cluster and "default" namespace by default
	I1006 19:57:29.893107  214483 node_ready.go:49] node "default-k8s-diff-port-997276" is "Ready"
	I1006 19:57:29.893140  214483 node_ready.go:38] duration metric: took 6.898129445s for node "default-k8s-diff-port-997276" to be "Ready" ...
	I1006 19:57:29.893155  214483 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:57:29.893213  214483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:57:33.843830  214483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.951040231s)
	I1006 19:57:33.843887  214483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.728078847s)
	I1006 19:57:33.844135  214483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.200682947s)
	I1006 19:57:33.844278  214483 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.951048593s)
	I1006 19:57:33.844299  214483 api_server.go:72] duration metric: took 11.433738908s to wait for apiserver process to appear ...
	I1006 19:57:33.844305  214483 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:57:33.844323  214483 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1006 19:57:33.846950  214483 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-997276 addons enable metrics-server
	
	I1006 19:57:33.867502  214483 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1006 19:57:33.877208  214483 api_server.go:141] control plane version: v1.34.1
	I1006 19:57:33.877240  214483 api_server.go:131] duration metric: took 32.928865ms to wait for apiserver health ...
	I1006 19:57:33.877249  214483 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:57:33.889547  214483 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	
	
	==> CRI-O <==
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.706160378Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.720471429Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5fe7dd2a-b2ec-4195-9221-40b78db90652 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.742261937Z" level=info msg="Ran pod sandbox 45d1d8eb3837633e6bbf978816e1006e6520694fbd8fd801b6229b166d19aa3b with infra container: kube-system/kindnet-v4krt/POD" id=5fe7dd2a-b2ec-4195-9221-40b78db90652 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.783985393Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-wsgmd/POD" id=83159863-0b44-4547-96f3-d719f5596566 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.784044053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.795307359Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=83159863-0b44-4547-96f3-d719f5596566 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.811028725Z" level=info msg="Ran pod sandbox 3e9b11bc6a5ba5294cb27f8bb6b8d42d84a1640d5b5093b093b408af354b6145 with infra container: kube-system/kube-proxy-wsgmd/POD" id=83159863-0b44-4547-96f3-d719f5596566 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.811868954Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=acfaac53-02e4-47ad-bf1f-aed72cbc5d87 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.854051569Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8dad303a-aea2-4355-b6f3-d9b911dfc312 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.864193435Z" level=info msg="Creating container: kube-system/kindnet-v4krt/kindnet-cni" id=0d6d1a5b-14e2-4135-9eec-f2520365b4c7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.864552835Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.866445776Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8e59c6f6-3006-4d34-9d3b-fb4ea16b7cc1 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.882239792Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4b3b15a3-286f-44e9-acb5-e38fdaa389d1 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.883965436Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.884728265Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.89537368Z" level=info msg="Creating container: kube-system/kube-proxy-wsgmd/kube-proxy" id=0d197ae7-e448-4be3-9ba4-31d5d61301e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.895910568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.940813341Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.941735819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:30 newest-cni-988436 crio[615]: time="2025-10-06T19:57:30.095120544Z" level=info msg="Created container 5dfc72bf385ca72f9ef0e7c8285dbd5e03723e68c0e33f9d091cd08c682f638b: kube-system/kindnet-v4krt/kindnet-cni" id=0d6d1a5b-14e2-4135-9eec-f2520365b4c7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:57:30 newest-cni-988436 crio[615]: time="2025-10-06T19:57:30.103266853Z" level=info msg="Starting container: 5dfc72bf385ca72f9ef0e7c8285dbd5e03723e68c0e33f9d091cd08c682f638b" id=3b524d9b-fc91-4fb9-990b-fc4e4a448308 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:57:30 newest-cni-988436 crio[615]: time="2025-10-06T19:57:30.11042066Z" level=info msg="Started container" PID=1065 containerID=5dfc72bf385ca72f9ef0e7c8285dbd5e03723e68c0e33f9d091cd08c682f638b description=kube-system/kindnet-v4krt/kindnet-cni id=3b524d9b-fc91-4fb9-990b-fc4e4a448308 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45d1d8eb3837633e6bbf978816e1006e6520694fbd8fd801b6229b166d19aa3b
	Oct 06 19:57:30 newest-cni-988436 crio[615]: time="2025-10-06T19:57:30.249821298Z" level=info msg="Created container 00e8681df5f941509c743b69461398538f9a51c14d6c19d45910d346d99d8d58: kube-system/kube-proxy-wsgmd/kube-proxy" id=0d197ae7-e448-4be3-9ba4-31d5d61301e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:57:30 newest-cni-988436 crio[615]: time="2025-10-06T19:57:30.251290322Z" level=info msg="Starting container: 00e8681df5f941509c743b69461398538f9a51c14d6c19d45910d346d99d8d58" id=31f2eacb-8a31-4dd7-a038-649b85a5cdfc name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:57:30 newest-cni-988436 crio[615]: time="2025-10-06T19:57:30.260027469Z" level=info msg="Started container" PID=1066 containerID=00e8681df5f941509c743b69461398538f9a51c14d6c19d45910d346d99d8d58 description=kube-system/kube-proxy-wsgmd/kube-proxy id=31f2eacb-8a31-4dd7-a038-649b85a5cdfc name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e9b11bc6a5ba5294cb27f8bb6b8d42d84a1640d5b5093b093b408af354b6145
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	00e8681df5f94       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   3e9b11bc6a5ba       kube-proxy-wsgmd                            kube-system
	5dfc72bf385ca       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   45d1d8eb38376       kindnet-v4krt                               kube-system
	f34d65961eeb0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   17 seconds ago      Running             kube-apiserver            1                   9a4a7e572144e       kube-apiserver-newest-cni-988436            kube-system
	134549c8cebf1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   17 seconds ago      Running             kube-scheduler            1                   f7eb990a0afe7       kube-scheduler-newest-cni-988436            kube-system
	edfddc0e3f8d1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   17 seconds ago      Running             kube-controller-manager   1                   a3c2aca9c783d       kube-controller-manager-newest-cni-988436   kube-system
	ef74cfcf93f82       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   17 seconds ago      Running             etcd                      1                   0d811681c1a68       etcd-newest-cni-988436                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-988436
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-988436
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=newest-cni-988436
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_57_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:56:57 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-988436
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:57:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:57:29 +0000   Mon, 06 Oct 2025 19:56:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:57:29 +0000   Mon, 06 Oct 2025 19:56:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:57:29 +0000   Mon, 06 Oct 2025 19:56:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 06 Oct 2025 19:57:29 +0000   Mon, 06 Oct 2025 19:56:53 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-988436
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 2987c5e9cbd44110adb07016a441e3c4
	  System UUID:                073e7007-fd42-4128-9331-8ee710c3ffcc
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-988436                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kindnet-v4krt                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-988436             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-988436    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-wsgmd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-988436             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node newest-cni-988436 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 44s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 44s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node newest-cni-988436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node newest-cni-988436 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     35s                kubelet          Node newest-cni-988436 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-988436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node newest-cni-988436 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           31s                node-controller  Node newest-cni-988436 event: Registered Node newest-cni-988436 in Controller
	  Normal   Starting                 18s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node newest-cni-988436 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node newest-cni-988436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18s (x8 over 18s)  kubelet          Node newest-cni-988436 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-988436 event: Registered Node newest-cni-988436 in Controller
	
	
	==> dmesg <==
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:52] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:55] overlayfs: idmapped layers are currently not supported
	[ +29.761517] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:56] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:57] overlayfs: idmapped layers are currently not supported
	[  +2.641672] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ef74cfcf93f82ed60cc7dc8b6e3da6bd2c5e5437877a94ad1f08dfe2f858aa0c] <==
	{"level":"warn","ts":"2025-10-06T19:57:25.839355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:25.916310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:25.988053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.068623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.093766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.168308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.208491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.257503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.336104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.383142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.400366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.453356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.509744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.562240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.630171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.778268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.831030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.968591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.016601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.067907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.112687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.132557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.153656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.196771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.287945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53130","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:57:37 up  1:39,  0 user,  load average: 5.30, 3.37, 2.35
	Linux newest-cni-988436 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5dfc72bf385ca72f9ef0e7c8285dbd5e03723e68c0e33f9d091cd08c682f638b] <==
	I1006 19:57:30.246721       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:57:30.246978       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1006 19:57:30.247073       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:57:30.247084       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:57:30.247094       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:57:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:57:30.473887       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:57:30.473905       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:57:30.473914       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:57:30.474191       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [f34d65961eeb0e039a90462d44820d5cb40290d6829def8b49b70d9904b4e966] <==
	I1006 19:57:29.239000       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 19:57:29.242481       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1006 19:57:29.254050       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1006 19:57:29.254147       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1006 19:57:29.254179       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1006 19:57:29.260013       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1006 19:57:29.260031       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1006 19:57:29.271655       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1006 19:57:29.273228       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1006 19:57:29.273707       1 aggregator.go:171] initial CRD sync complete...
	I1006 19:57:29.273795       1 autoregister_controller.go:144] Starting autoregister controller
	I1006 19:57:29.273825       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 19:57:29.275709       1 cache.go:39] Caches are synced for autoregister controller
	I1006 19:57:29.446887       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 19:57:29.455203       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:57:31.514676       1 controller.go:667] quota admission added evaluator for: namespaces
	I1006 19:57:31.622208       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 19:57:31.696272       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:57:31.727086       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:57:32.024648       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.144.3"}
	I1006 19:57:32.137189       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.218.23"}
	I1006 19:57:33.543988       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1006 19:57:33.591909       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 19:57:33.794671       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 19:57:33.847266       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [edfddc0e3f8d1c28acd50b5fa86583ed3677369806e4f0393d57c1cb3eba08dd] <==
	I1006 19:57:33.351748       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1006 19:57:33.354535       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1006 19:57:33.354567       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1006 19:57:33.354591       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1006 19:57:33.360387       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1006 19:57:33.382761       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1006 19:57:33.382844       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1006 19:57:33.385335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1006 19:57:33.387841       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1006 19:57:33.389399       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1006 19:57:33.395829       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1006 19:57:33.395948       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1006 19:57:33.396014       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1006 19:57:33.396314       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1006 19:57:33.396788       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1006 19:57:33.404309       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:57:33.406728       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1006 19:57:33.425252       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 19:57:33.428821       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:57:33.428851       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:57:33.428859       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 19:57:33.431873       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1006 19:57:33.434061       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 19:57:33.449823       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1006 19:57:33.451940       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [00e8681df5f941509c743b69461398538f9a51c14d6c19d45910d346d99d8d58] <==
	I1006 19:57:31.268957       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:57:31.373224       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:57:31.503050       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:57:31.503098       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1006 19:57:31.503170       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:57:31.693112       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:57:31.693167       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:57:31.762434       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:57:31.762806       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:57:31.762819       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:57:31.784514       1 config.go:200] "Starting service config controller"
	I1006 19:57:31.784539       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:57:31.784564       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:57:31.784569       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:57:31.784583       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:57:31.784587       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:57:31.794089       1 config.go:309] "Starting node config controller"
	I1006 19:57:31.794117       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:57:31.794127       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:57:31.885574       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 19:57:31.885681       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 19:57:31.885761       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [134549c8cebf10c064c8c3afaf8ea9bc7932630b60156ffdc5ba7e9afcd15c21] <==
	I1006 19:57:25.616522       1 serving.go:386] Generated self-signed cert in-memory
	W1006 19:57:28.600104       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1006 19:57:28.600229       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1006 19:57:28.600266       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1006 19:57:28.600329       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1006 19:57:29.173708       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 19:57:29.173748       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:57:29.213591       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 19:57:29.213704       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:57:29.213720       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:57:29.213734       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 19:57:29.414154       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 19:57:23 newest-cni-988436 kubelet[729]: E1006 19:57:23.628338     729 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-988436\" not found" node="newest-cni-988436"
	Oct 06 19:57:28 newest-cni-988436 kubelet[729]: E1006 19:57:28.554898     729 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"newest-cni-988436\" not found"
	Oct 06 19:57:28 newest-cni-988436 kubelet[729]: I1006 19:57:28.723847     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.330186     729 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.330440     729 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.330545     729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: E1006 19:57:29.336319     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-988436\" already exists" pod="kube-system/kube-controller-manager-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.336503     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.337414     729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: E1006 19:57:29.359052     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-988436\" already exists" pod="kube-system/kube-scheduler-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.359234     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.366632     729 apiserver.go:52] "Watching apiserver"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: E1006 19:57:29.396916     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-988436\" already exists" pod="kube-system/etcd-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.397100     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.408338     729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.425070     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8b2c3ef8-c3bb-4e24-a72a-5a696590f257-cni-cfg\") pod \"kindnet-v4krt\" (UID: \"8b2c3ef8-c3bb-4e24-a72a-5a696590f257\") " pod="kube-system/kindnet-v4krt"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.425275     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b2c3ef8-c3bb-4e24-a72a-5a696590f257-xtables-lock\") pod \"kindnet-v4krt\" (UID: \"8b2c3ef8-c3bb-4e24-a72a-5a696590f257\") " pod="kube-system/kindnet-v4krt"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.425391     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2289712-8aa7-4ef1-909f-02322c74d8ee-xtables-lock\") pod \"kube-proxy-wsgmd\" (UID: \"b2289712-8aa7-4ef1-909f-02322c74d8ee\") " pod="kube-system/kube-proxy-wsgmd"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.425507     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2289712-8aa7-4ef1-909f-02322c74d8ee-lib-modules\") pod \"kube-proxy-wsgmd\" (UID: \"b2289712-8aa7-4ef1-909f-02322c74d8ee\") " pod="kube-system/kube-proxy-wsgmd"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.425639     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b2c3ef8-c3bb-4e24-a72a-5a696590f257-lib-modules\") pod \"kindnet-v4krt\" (UID: \"8b2c3ef8-c3bb-4e24-a72a-5a696590f257\") " pod="kube-system/kindnet-v4krt"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: E1006 19:57:29.457547     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-988436\" already exists" pod="kube-system/kube-apiserver-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.499204     729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 06 19:57:34 newest-cni-988436 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 06 19:57:34 newest-cni-988436 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 06 19:57:34 newest-cni-988436 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-988436 -n newest-cni-988436
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-988436 -n newest-cni-988436: exit status 2 (483.441521ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-988436 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-z6drc storage-provisioner dashboard-metrics-scraper-6ffb444bf9-57k92 kubernetes-dashboard-855c9754f9-7h2mx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-988436 describe pod coredns-66bc5c9577-z6drc storage-provisioner dashboard-metrics-scraper-6ffb444bf9-57k92 kubernetes-dashboard-855c9754f9-7h2mx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-988436 describe pod coredns-66bc5c9577-z6drc storage-provisioner dashboard-metrics-scraper-6ffb444bf9-57k92 kubernetes-dashboard-855c9754f9-7h2mx: exit status 1 (122.96828ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-z6drc" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-57k92" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7h2mx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-988436 describe pod coredns-66bc5c9577-z6drc storage-provisioner dashboard-metrics-scraper-6ffb444bf9-57k92 kubernetes-dashboard-855c9754f9-7h2mx: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-988436
helpers_test.go:243: (dbg) docker inspect newest-cni-988436:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd",
	        "Created": "2025-10-06T19:56:32.85241989Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:57:11.13183267Z",
	            "FinishedAt": "2025-10-06T19:57:10.181999701Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd/hosts",
	        "LogPath": "/var/lib/docker/containers/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd/9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd-json.log",
	        "Name": "/newest-cni-988436",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-988436:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-988436",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b780de2752c967e18231ad401a9aab982a50b731cd608d9fd3351a30367d6fd",
	                "LowerDir": "/var/lib/docker/overlay2/5c259bf9ca1cd1f255780dfe0febf52c4b8906ab233552c3762d9ba362419316-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5c259bf9ca1cd1f255780dfe0febf52c4b8906ab233552c3762d9ba362419316/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5c259bf9ca1cd1f255780dfe0febf52c4b8906ab233552c3762d9ba362419316/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5c259bf9ca1cd1f255780dfe0febf52c4b8906ab233552c3762d9ba362419316/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-988436",
	                "Source": "/var/lib/docker/volumes/newest-cni-988436/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-988436",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-988436",
	                "name.minikube.sigs.k8s.io": "newest-cni-988436",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5cd6c68d9a2f028bf5bc25659aa7bdb9ba1d8e0ee9dd6fd113157ce6b681cc15",
	            "SandboxKey": "/var/run/docker/netns/5cd6c68d9a2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-988436": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:50:06:b6:1c:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a65d7e56c8a8636a38fe861ce7ce76450c77f0c819639a82d76a33b2e2e5cd5c",
	                    "EndpointID": "308468a5247a62739bdd39e93340f27c1ae37d0bfa958b4878dfc856800f0745",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-988436",
	                        "9b780de2752c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-988436 -n newest-cni-988436
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-988436 -n newest-cni-988436: exit status 2 (420.059341ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-988436 logs -n 25
E1006 19:57:39.513271    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-988436 logs -n 25: (1.454139789s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable metrics-server -p embed-certs-830393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:54 UTC │                     │
	│ stop    │ -p embed-certs-830393 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ addons  │ enable dashboard -p embed-certs-830393 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ pause   │ -p no-preload-314275 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │                     │
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:56 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p disable-driver-mounts-932453                                                                                                                                                                                                               │ disable-driver-mounts-932453 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ start   │ -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:56 UTC │
	│ image   │ embed-certs-830393 image list --format=json                                                                                                                                                                                                   │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ pause   │ -p embed-certs-830393 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	│ delete  │ -p embed-certs-830393                                                                                                                                                                                                                         │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ delete  │ -p embed-certs-830393                                                                                                                                                                                                                         │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ start   │ -p newest-cni-988436 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-997276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-997276 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p newest-cni-988436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │                     │
	│ stop    │ -p newest-cni-988436 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable dashboard -p newest-cni-988436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ start   │ -p newest-cni-988436 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-997276 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ start   │ -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │                     │
	│ image   │ newest-cni-988436 image list --format=json                                                                                                                                                                                                    │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ pause   │ -p newest-cni-988436 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:57:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:57:12.457316  214483 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:57:12.457487  214483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:57:12.457516  214483 out.go:374] Setting ErrFile to fd 2...
	I1006 19:57:12.457536  214483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:57:12.457790  214483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:57:12.458182  214483 out.go:368] Setting JSON to false
	I1006 19:57:12.459059  214483 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5968,"bootTime":1759774665,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:57:12.459161  214483 start.go:140] virtualization:  
	I1006 19:57:12.463762  214483 out.go:179] * [default-k8s-diff-port-997276] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:57:12.467025  214483 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:57:12.467110  214483 notify.go:220] Checking for updates...
	I1006 19:57:12.472895  214483 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:57:12.475808  214483 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:57:12.478821  214483 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:57:12.481722  214483 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:57:12.484688  214483 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:57:12.488072  214483 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:12.488681  214483 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:57:12.513377  214483 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:57:12.513497  214483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:57:12.573907  214483 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-06 19:57:12.564543979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:57:12.574017  214483 docker.go:318] overlay module found
	I1006 19:57:12.577323  214483 out.go:179] * Using the docker driver based on existing profile
	I1006 19:57:12.580126  214483 start.go:304] selected driver: docker
	I1006 19:57:12.580146  214483 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:57:12.580248  214483 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:57:12.580957  214483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:57:12.648083  214483 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-06 19:57:12.638983786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:57:12.648458  214483 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:57:12.648495  214483 cni.go:84] Creating CNI manager for ""
	I1006 19:57:12.648556  214483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:57:12.648625  214483 start.go:348] cluster config:
	{Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:57:12.653680  214483 out.go:179] * Starting "default-k8s-diff-port-997276" primary control-plane node in "default-k8s-diff-port-997276" cluster
	I1006 19:57:12.656515  214483 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:57:12.659398  214483 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:57:12.663029  214483 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:57:12.663095  214483 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:57:12.663105  214483 cache.go:58] Caching tarball of preloaded images
	I1006 19:57:12.663158  214483 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:57:12.663196  214483 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:57:12.663206  214483 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:57:12.663326  214483 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/config.json ...
	I1006 19:57:12.682510  214483 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:57:12.682539  214483 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:57:12.682585  214483 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:57:12.682612  214483 start.go:360] acquireMachinesLock for default-k8s-diff-port-997276: {Name:mk7b25a356bfff93cc3ef03a69dea8b7e852b578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:57:12.682671  214483 start.go:364] duration metric: took 37.482µs to acquireMachinesLock for "default-k8s-diff-port-997276"
	I1006 19:57:12.682696  214483 start.go:96] Skipping create...Using existing machine configuration
	I1006 19:57:12.682712  214483 fix.go:54] fixHost starting: 
	I1006 19:57:12.682991  214483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:57:12.699687  214483 fix.go:112] recreateIfNeeded on default-k8s-diff-port-997276: state=Stopped err=<nil>
	W1006 19:57:12.699754  214483 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 19:57:11.100406  213865 out.go:252] * Restarting existing docker container for "newest-cni-988436" ...
	I1006 19:57:11.100509  213865 cli_runner.go:164] Run: docker start newest-cni-988436
	I1006 19:57:11.358870  213865 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:57:11.379256  213865 kic.go:430] container "newest-cni-988436" state is running.
	I1006 19:57:11.379760  213865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-988436
	I1006 19:57:11.408631  213865 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/config.json ...
	I1006 19:57:11.408927  213865 machine.go:93] provisionDockerMachine start ...
	I1006 19:57:11.409009  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:11.439872  213865 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:11.440609  213865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1006 19:57:11.440634  213865 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:57:11.441967  213865 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1006 19:57:14.579498  213865 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-988436
	
	I1006 19:57:14.579527  213865 ubuntu.go:182] provisioning hostname "newest-cni-988436"
	I1006 19:57:14.579590  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:14.598393  213865 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:14.598704  213865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1006 19:57:14.598716  213865 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-988436 && echo "newest-cni-988436" | sudo tee /etc/hostname
	I1006 19:57:14.741265  213865 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-988436
	
	I1006 19:57:14.741340  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:14.759862  213865 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:14.760163  213865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1006 19:57:14.760180  213865 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-988436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-988436/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-988436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:57:14.892059  213865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:57:14.892084  213865 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:57:14.892120  213865 ubuntu.go:190] setting up certificates
	I1006 19:57:14.892133  213865 provision.go:84] configureAuth start
	I1006 19:57:14.892198  213865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-988436
	I1006 19:57:14.910422  213865 provision.go:143] copyHostCerts
	I1006 19:57:14.910490  213865 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:57:14.910508  213865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:57:14.910586  213865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:57:14.910682  213865 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:57:14.910688  213865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:57:14.910713  213865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:57:14.910762  213865 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:57:14.910767  213865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:57:14.910788  213865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:57:14.910869  213865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.newest-cni-988436 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-988436]
	I1006 19:57:15.410126  213865 provision.go:177] copyRemoteCerts
	I1006 19:57:15.410209  213865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:57:15.410249  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:15.427357  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:15.523744  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 19:57:15.542092  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:57:15.559935  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 19:57:15.580163  213865 provision.go:87] duration metric: took 688.005611ms to configureAuth
	I1006 19:57:15.580195  213865 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:57:15.580433  213865 config.go:182] Loaded profile config "newest-cni-988436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:15.580555  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:15.598176  213865 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:15.598490  213865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33090 <nil> <nil>}
	I1006 19:57:15.598508  213865 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:57:12.703239  214483 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-997276" ...
	I1006 19:57:12.703321  214483 cli_runner.go:164] Run: docker start default-k8s-diff-port-997276
	I1006 19:57:13.041404  214483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:57:13.063462  214483 kic.go:430] container "default-k8s-diff-port-997276" state is running.
	I1006 19:57:13.063919  214483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997276
	I1006 19:57:13.086800  214483 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/config.json ...
	I1006 19:57:13.087044  214483 machine.go:93] provisionDockerMachine start ...
	I1006 19:57:13.087123  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:13.107124  214483 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:13.107438  214483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1006 19:57:13.107455  214483 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:57:13.108941  214483 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1006 19:57:16.271499  214483 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-997276
	
	I1006 19:57:16.271522  214483 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-997276"
	I1006 19:57:16.271590  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:16.293026  214483 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:16.293362  214483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1006 19:57:16.293375  214483 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-997276 && echo "default-k8s-diff-port-997276" | sudo tee /etc/hostname
	I1006 19:57:16.468233  214483 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-997276
	
	I1006 19:57:16.468308  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:16.501085  214483 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:16.501384  214483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1006 19:57:16.501408  214483 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-997276' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-997276/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-997276' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:57:16.660487  214483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:57:16.660517  214483 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:57:16.660551  214483 ubuntu.go:190] setting up certificates
	I1006 19:57:16.660566  214483 provision.go:84] configureAuth start
	I1006 19:57:16.660631  214483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997276
	I1006 19:57:16.684354  214483 provision.go:143] copyHostCerts
	I1006 19:57:16.684433  214483 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:57:16.684453  214483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:57:16.684521  214483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:57:16.684619  214483 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:57:16.684628  214483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:57:16.684652  214483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:57:16.684707  214483 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:57:16.684715  214483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:57:16.684735  214483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:57:16.684787  214483 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-997276 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-997276 localhost minikube]
	I1006 19:57:17.091215  214483 provision.go:177] copyRemoteCerts
	I1006 19:57:17.091338  214483 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:57:17.091427  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:17.111346  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:17.213923  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:57:17.241686  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1006 19:57:17.265027  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 19:57:17.288261  214483 provision.go:87] duration metric: took 627.667072ms to configureAuth
	I1006 19:57:17.288283  214483 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:57:17.288487  214483 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:17.288594  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:17.329369  214483 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:17.329671  214483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1006 19:57:17.329686  214483 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:57:15.871995  213865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:57:15.872018  213865 machine.go:96] duration metric: took 4.463081627s to provisionDockerMachine
	I1006 19:57:15.872030  213865 start.go:293] postStartSetup for "newest-cni-988436" (driver="docker")
	I1006 19:57:15.872040  213865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:57:15.872107  213865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:57:15.872147  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:15.892883  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:15.988128  213865 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:57:15.991880  213865 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:57:15.991912  213865 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:57:15.991941  213865 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:57:15.992005  213865 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:57:15.992136  213865 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:57:15.992245  213865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:57:16.000130  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:57:16.018910  213865 start.go:296] duration metric: took 146.863291ms for postStartSetup
	I1006 19:57:16.019003  213865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:57:16.019097  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:16.038016  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:16.137178  213865 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:57:16.147755  213865 fix.go:56] duration metric: took 5.067925969s for fixHost
	I1006 19:57:16.147830  213865 start.go:83] releasing machines lock for "newest-cni-988436", held for 5.068035739s
	I1006 19:57:16.147934  213865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-988436
	I1006 19:57:16.169786  213865 ssh_runner.go:195] Run: cat /version.json
	I1006 19:57:16.169836  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:16.170069  213865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:57:16.170127  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:16.191058  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:16.212989  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:16.287839  213865 ssh_runner.go:195] Run: systemctl --version
	I1006 19:57:16.386800  213865 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:57:16.427025  213865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:57:16.431371  213865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:57:16.431446  213865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:57:16.440403  213865 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 19:57:16.440427  213865 start.go:495] detecting cgroup driver to use...
	I1006 19:57:16.440481  213865 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:57:16.440556  213865 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:57:16.457511  213865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:57:16.477175  213865 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:57:16.477239  213865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:57:16.496724  213865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:57:16.520234  213865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:57:16.663112  213865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:57:16.809948  213865 docker.go:234] disabling docker service ...
	I1006 19:57:16.810014  213865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:57:16.827479  213865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:57:16.841452  213865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:57:16.975220  213865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:57:17.140694  213865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:57:17.154904  213865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:57:17.172592  213865 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:57:17.172665  213865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:17.181857  213865 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:57:17.181925  213865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:17.190804  213865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:17.199453  213865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:17.209166  213865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:57:17.217629  213865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:17.228382  213865 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:17.237991  213865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:17.248893  213865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:57:17.256749  213865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:57:17.264603  213865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:17.413673  213865 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 19:57:17.568098  213865 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:57:17.568169  213865 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:57:17.574094  213865 start.go:563] Will wait 60s for crictl version
	I1006 19:57:17.574156  213865 ssh_runner.go:195] Run: which crictl
	I1006 19:57:17.578718  213865 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:57:17.625344  213865 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:57:17.625433  213865 ssh_runner.go:195] Run: crio --version
	I1006 19:57:17.655488  213865 ssh_runner.go:195] Run: crio --version
	I1006 19:57:17.695949  213865 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:57:17.699214  213865 cli_runner.go:164] Run: docker network inspect newest-cni-988436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:57:17.725324  213865 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1006 19:57:17.731896  213865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:57:17.746102  213865 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1006 19:57:17.749054  213865 kubeadm.go:883] updating cluster {Name:newest-cni-988436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:57:17.749200  213865 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:57:17.749279  213865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:57:17.788012  213865 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:57:17.788038  213865 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:57:17.788095  213865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:57:17.814676  213865 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:57:17.814704  213865 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:57:17.814712  213865 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1006 19:57:17.814873  213865 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-988436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:57:17.814982  213865 ssh_runner.go:195] Run: crio config
	I1006 19:57:17.904305  213865 cni.go:84] Creating CNI manager for ""
	I1006 19:57:17.904345  213865 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:57:17.904396  213865 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1006 19:57:17.904434  213865 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-988436 NodeName:newest-cni-988436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:57:17.904582  213865 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-988436"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:57:17.904670  213865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:57:17.927124  213865 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:57:17.927198  213865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:57:17.935482  213865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1006 19:57:17.950796  213865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:57:17.966065  213865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
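	The 2212-byte kubeadm.yaml.new staged above carries the multi-document config printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a sketch, recent kubeadm releases can sanity-check such a file directly; using the binary path from this run:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new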
	I1006 19:57:17.980969  213865 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:57:17.984926  213865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:57:17.995076  213865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:18.183336  213865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:57:18.201198  213865 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436 for IP: 192.168.85.2
	I1006 19:57:18.201222  213865 certs.go:195] generating shared ca certs ...
	I1006 19:57:18.201238  213865 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:18.201375  213865 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:57:18.201431  213865 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:57:18.201442  213865 certs.go:257] generating profile certs ...
	I1006 19:57:18.201526  213865 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/client.key
	I1006 19:57:18.201595  213865 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key.1ee6693d
	I1006 19:57:18.201637  213865 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.key
	I1006 19:57:18.201744  213865 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:57:18.201777  213865 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:57:18.201793  213865 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:57:18.201819  213865 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:57:18.201850  213865 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:57:18.201874  213865 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:57:18.201920  213865 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:57:18.202518  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:57:18.248587  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:57:18.284398  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:57:18.307310  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:57:18.334017  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 19:57:18.364395  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 19:57:18.411321  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:57:18.444657  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/newest-cni-988436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 19:57:18.501894  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:57:18.544574  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:57:18.572382  213865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:57:18.595247  213865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:57:18.611789  213865 ssh_runner.go:195] Run: openssl version
	I1006 19:57:18.620090  213865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:57:18.629422  213865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:57:18.633621  213865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:57:18.633748  213865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:57:18.678305  213865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:57:18.686586  213865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:57:18.700089  213865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:57:18.704666  213865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:57:18.704734  213865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:57:18.753665  213865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 19:57:18.761701  213865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:57:18.773022  213865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:57:18.777434  213865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:57:18.777499  213865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:57:18.831033  213865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
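	The three symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's hashed-directory scheme: a CA in /etc/ssl/certs is located through a link named after its subject-name hash with a .0 suffix. A minimal sketch of the same mechanism, reusing the minikubeCA file from this run:

	    # the hash printed here is exactly the name the .0 symlink must have
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	    # lookup via -CApath now succeeds without naming the CA file explicitly
	    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem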
	I1006 19:57:18.839018  213865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:57:18.843339  213865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 19:57:18.888578  213865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 19:57:18.936741  213865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 19:57:18.990548  213865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 19:57:19.066275  213865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 19:57:19.140783  213865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
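	Each openssl run above uses -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now and 1 if it will have expired by then, so a failing check is what would flag a certificate for renewal. The same test in isolation, with one of the paths from this log:

	    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	      echo "cert valid for at least another 24h"
	    else
	      echo "cert expires within 24h (or is already expired)"
	    fi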
	I1006 19:57:19.226408  213865 kubeadm.go:400] StartCluster: {Name:newest-cni-988436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-988436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:57:19.226501  213865 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:57:19.226576  213865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:57:19.320520  213865 cri.go:89] found id: "ef74cfcf93f82ed60cc7dc8b6e3da6bd2c5e5437877a94ad1f08dfe2f858aa0c"
	I1006 19:57:19.320544  213865 cri.go:89] found id: ""
	I1006 19:57:19.320596  213865 ssh_runner.go:195] Run: sudo runc list -f json
	W1006 19:57:19.337590  213865 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:57:19Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:57:19.337689  213865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:57:19.357304  213865 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 19:57:19.357334  213865 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 19:57:19.357387  213865 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 19:57:19.400885  213865 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 19:57:19.401293  213865 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-988436" does not appear in /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:57:19.401396  213865 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-2540/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-988436" cluster setting kubeconfig missing "newest-cni-988436" context setting]
	I1006 19:57:19.401697  213865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:19.402904  213865 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 19:57:19.419742  213865 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1006 19:57:19.419829  213865 kubeadm.go:601] duration metric: took 62.487145ms to restartPrimaryControlPlane
	I1006 19:57:19.419853  213865 kubeadm.go:402] duration metric: took 193.454555ms to StartCluster
	I1006 19:57:19.419900  213865 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:19.420097  213865 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:57:19.420792  213865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:19.421118  213865 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:57:19.421356  213865 config.go:182] Loaded profile config "newest-cni-988436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:19.421370  213865 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:57:19.421781  213865 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-988436"
	I1006 19:57:19.421796  213865 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-988436"
	W1006 19:57:19.421803  213865 addons.go:247] addon storage-provisioner should already be in state true
	I1006 19:57:19.421826  213865 host.go:66] Checking if "newest-cni-988436" exists ...
	I1006 19:57:19.421836  213865 addons.go:69] Setting dashboard=true in profile "newest-cni-988436"
	I1006 19:57:19.421853  213865 addons.go:238] Setting addon dashboard=true in "newest-cni-988436"
	W1006 19:57:19.421871  213865 addons.go:247] addon dashboard should already be in state true
	I1006 19:57:19.421895  213865 host.go:66] Checking if "newest-cni-988436" exists ...
	I1006 19:57:19.422277  213865 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:57:19.422394  213865 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:57:19.422765  213865 addons.go:69] Setting default-storageclass=true in profile "newest-cni-988436"
	I1006 19:57:19.422789  213865 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-988436"
	I1006 19:57:19.423069  213865 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:57:19.425885  213865 out.go:179] * Verifying Kubernetes components...
	I1006 19:57:19.431961  213865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:19.481842  213865 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 19:57:19.481897  213865 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1006 19:57:19.484808  213865 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:57:19.484833  213865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 19:57:19.484902  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:19.488919  213865 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1006 19:57:17.694447  214483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:57:17.694478  214483 machine.go:96] duration metric: took 4.607416378s to provisionDockerMachine
	I1006 19:57:17.694506  214483 start.go:293] postStartSetup for "default-k8s-diff-port-997276" (driver="docker")
	I1006 19:57:17.694523  214483 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:57:17.694678  214483 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:57:17.694750  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:17.728502  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:17.838424  214483 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:57:17.842688  214483 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:57:17.842715  214483 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:57:17.842725  214483 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:57:17.842783  214483 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:57:17.842880  214483 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:57:17.842986  214483 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:57:17.851382  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:57:17.871148  214483 start.go:296] duration metric: took 176.620175ms for postStartSetup
	I1006 19:57:17.871274  214483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:57:17.871359  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:17.903012  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:18.011889  214483 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:57:18.027883  214483 fix.go:56] duration metric: took 5.345167136s for fixHost
	I1006 19:57:18.027909  214483 start.go:83] releasing machines lock for "default-k8s-diff-port-997276", held for 5.345223318s
	I1006 19:57:18.027986  214483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-997276
	I1006 19:57:18.068131  214483 ssh_runner.go:195] Run: cat /version.json
	I1006 19:57:18.068191  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:18.068211  214483 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:57:18.068272  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:18.099828  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:18.116183  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:18.212328  214483 ssh_runner.go:195] Run: systemctl --version
	I1006 19:57:18.322713  214483 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:57:18.385651  214483 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:57:18.391829  214483 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:57:18.391987  214483 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:57:18.404019  214483 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 19:57:18.404043  214483 start.go:495] detecting cgroup driver to use...
	I1006 19:57:18.404076  214483 detect.go:187] detected "cgroupfs" cgroup driver on host os
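	The "cgroupfs" result above is what later appears as cgroupDriver: cgroupfs in the generated kubelet config and as cgroup_manager = "cgroupfs" in the CRI-O drop-in edited further down. One common way to check the same thing by hand on the host (a heuristic, not necessarily minikube's exact detection logic):

	    # cgroup v2 hosts report "cgroup2fs" here; v1/hybrid hosts typically report "tmpfs"
	    stat -fc %T /sys/fs/cgroup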
	I1006 19:57:18.404123  214483 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:57:18.424988  214483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:57:18.446991  214483 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:57:18.447068  214483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:57:18.472701  214483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:57:18.489522  214483 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:57:18.692900  214483 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:57:18.858664  214483 docker.go:234] disabling docker service ...
	I1006 19:57:18.858727  214483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:57:18.875984  214483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:57:18.890542  214483 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:57:19.051290  214483 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:57:19.260867  214483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:57:19.280181  214483 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:57:19.302063  214483 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:57:19.302197  214483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:19.326820  214483 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:57:19.326900  214483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:19.340732  214483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:19.356307  214483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:19.369072  214483 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:57:19.378077  214483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:19.394503  214483 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:19.412997  214483 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
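	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl entry. A quick way to confirm the resulting drop-in before the crio restart a few lines below:

	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf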
	I1006 19:57:19.447857  214483 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:57:19.492047  214483 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:57:19.520732  214483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:19.773084  214483 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 19:57:19.983369  214483 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:57:19.983431  214483 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:57:19.989013  214483 start.go:563] Will wait 60s for crictl version
	I1006 19:57:19.989106  214483 ssh_runner.go:195] Run: which crictl
	I1006 19:57:19.995066  214483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:57:20.049210  214483 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:57:20.049295  214483 ssh_runner.go:195] Run: crio --version
	I1006 19:57:20.109913  214483 ssh_runner.go:195] Run: crio --version
	I1006 19:57:20.162999  214483 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:57:19.496197  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1006 19:57:19.496218  213865 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1006 19:57:19.496291  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:19.497183  213865 addons.go:238] Setting addon default-storageclass=true in "newest-cni-988436"
	W1006 19:57:19.497205  213865 addons.go:247] addon default-storageclass should already be in state true
	I1006 19:57:19.497232  213865 host.go:66] Checking if "newest-cni-988436" exists ...
	I1006 19:57:19.497654  213865 cli_runner.go:164] Run: docker container inspect newest-cni-988436 --format={{.State.Status}}
	I1006 19:57:19.547945  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:19.553117  213865 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 19:57:19.553137  213865 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 19:57:19.553199  213865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-988436
	I1006 19:57:19.562006  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:19.589630  213865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/newest-cni-988436/id_rsa Username:docker}
	I1006 19:57:19.908554  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1006 19:57:19.908580  213865 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1006 19:57:19.943179  213865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:57:19.978739  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1006 19:57:19.978826  213865 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1006 19:57:19.980118  213865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:57:20.014626  213865 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:57:20.014729  213865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:57:20.076314  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1006 19:57:20.076342  213865 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1006 19:57:20.079777  213865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 19:57:20.196271  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1006 19:57:20.196297  213865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1006 19:57:20.290794  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1006 19:57:20.290817  213865 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1006 19:57:20.363499  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1006 19:57:20.363525  213865 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1006 19:57:20.405880  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1006 19:57:20.405907  213865 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1006 19:57:20.425331  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1006 19:57:20.425357  213865 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1006 19:57:20.461297  213865 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 19:57:20.461323  213865 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1006 19:57:20.499420  213865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
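	The single kubectl apply above installs the whole dashboard addon (namespace, RBAC objects, ConfigMap, Deployment, Secret, Service) from the ten manifests staged under /etc/kubernetes/addons. To watch it converge with the same kubeconfig and binary the test uses (the kubernetes-dashboard namespace is assumed from the standard addon manifests, not shown in this log):

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get deploy,svc,pods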
	I1006 19:57:20.166049  214483 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-997276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:57:20.198340  214483 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1006 19:57:20.203043  214483 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:57:20.220911  214483 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:57:20.221031  214483 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:57:20.221087  214483 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:57:20.300882  214483 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:57:20.300908  214483 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:57:20.300965  214483 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:57:20.351095  214483 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:57:20.351119  214483 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:57:20.351128  214483 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1006 19:57:20.351239  214483 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-997276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:57:20.351328  214483 ssh_runner.go:195] Run: crio config
	I1006 19:57:20.463477  214483 cni.go:84] Creating CNI manager for ""
	I1006 19:57:20.463497  214483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:57:20.463523  214483 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:57:20.463551  214483 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-997276 NodeName:default-k8s-diff-port-997276 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:57:20.463752  214483 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-997276"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:57:20.463839  214483 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:57:20.473040  214483 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:57:20.473158  214483 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:57:20.482196  214483 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1006 19:57:20.500594  214483 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:57:20.519950  214483 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1006 19:57:20.541495  214483 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:57:20.552400  214483 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:57:20.567976  214483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:20.775832  214483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:57:20.816179  214483 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276 for IP: 192.168.76.2
	I1006 19:57:20.816256  214483 certs.go:195] generating shared ca certs ...
	I1006 19:57:20.816289  214483 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:20.816489  214483 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:57:20.816566  214483 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:57:20.816601  214483 certs.go:257] generating profile certs ...
	I1006 19:57:20.816733  214483 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.key
	I1006 19:57:20.816820  214483 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key.24aba503
	I1006 19:57:20.816890  214483 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.key
	I1006 19:57:20.817035  214483 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:57:20.817091  214483 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:57:20.817117  214483 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:57:20.817175  214483 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:57:20.817225  214483 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:57:20.817263  214483 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:57:20.817333  214483 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:57:20.817933  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:57:20.838075  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:57:20.872584  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:57:20.905243  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:57:20.936636  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1006 19:57:20.966840  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 19:57:21.007071  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:57:21.065101  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 19:57:21.117223  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:57:21.176481  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:57:21.223765  214483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:57:21.280982  214483 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:57:21.312101  214483 ssh_runner.go:195] Run: openssl version
	I1006 19:57:21.319545  214483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:57:21.331611  214483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:57:21.344080  214483 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:57:21.344200  214483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:57:21.393852  214483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:57:21.402132  214483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:57:21.414268  214483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:57:21.419537  214483 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:57:21.419621  214483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:57:21.462093  214483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:57:21.476198  214483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:57:21.485631  214483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:57:21.492344  214483 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:57:21.492472  214483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:57:21.540609  214483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 19:57:21.548639  214483 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:57:21.553095  214483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 19:57:21.597377  214483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 19:57:21.640850  214483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 19:57:21.752718  214483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 19:57:21.866368  214483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 19:57:22.021521  214483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1006 19:57:22.180891  214483 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-997276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-997276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:57:22.181007  214483 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:57:22.181092  214483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:57:22.306699  214483 cri.go:89] found id: "69016b9f99d774f122c6a833714dc8aa9a9bed369630bb89cab483dc73bb2308"
	I1006 19:57:22.306733  214483 cri.go:89] found id: "d55af54186cd2219cd4bae48a2441c417921efdadcb8fc6384bac72028d350db"
	I1006 19:57:22.306738  214483 cri.go:89] found id: "e8361e8e1ee0632342d163d9478d496057250b3eeebfb0c28f6c1c0c8879ec24"
	I1006 19:57:22.306750  214483 cri.go:89] found id: "3ccfad32d3fc760df3c3a85997578af79aef13072ed1d94e610c623e692996c5"
	I1006 19:57:22.306754  214483 cri.go:89] found id: ""
	I1006 19:57:22.306814  214483 ssh_runner.go:195] Run: sudo runc list -f json
	W1006 19:57:22.349254  214483 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:57:22Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:57:22.349355  214483 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:57:22.368086  214483 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 19:57:22.368156  214483 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 19:57:22.368238  214483 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 19:57:22.385006  214483 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 19:57:22.385611  214483 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-997276" does not appear in /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:57:22.385885  214483 kubeconfig.go:62] /home/jenkins/minikube-integration/21701-2540/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-997276" cluster setting kubeconfig missing "default-k8s-diff-port-997276" context setting]
	I1006 19:57:22.386441  214483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:22.388197  214483 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 19:57:22.409168  214483 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1006 19:57:22.409203  214483 kubeadm.go:601] duration metric: took 41.029838ms to restartPrimaryControlPlane
	I1006 19:57:22.409212  214483 kubeadm.go:402] duration metric: took 228.331748ms to StartCluster
	I1006 19:57:22.409227  214483 settings.go:142] acquiring lock: {Name:mkaade96c66593c653256adaeb0a3029ca60e0c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:22.409301  214483 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:57:22.410280  214483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/kubeconfig: {Name:mkf8cce8f9dace5c4d41967d6ad8df5e03ba53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:22.410523  214483 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:57:22.410925  214483 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:22.410886  214483 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 19:57:22.410974  214483 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-997276"
	I1006 19:57:22.410978  214483 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-997276"
	I1006 19:57:22.410989  214483 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-997276"
	I1006 19:57:22.410990  214483 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-997276"
	W1006 19:57:22.410997  214483 addons.go:247] addon storage-provisioner should already be in state true
	I1006 19:57:22.411016  214483 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-997276"
	I1006 19:57:22.411022  214483 host.go:66] Checking if "default-k8s-diff-port-997276" exists ...
	I1006 19:57:22.411028  214483 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-997276"
	I1006 19:57:22.411451  214483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:57:22.411460  214483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	W1006 19:57:22.410998  214483 addons.go:247] addon dashboard should already be in state true
	I1006 19:57:22.415692  214483 host.go:66] Checking if "default-k8s-diff-port-997276" exists ...
	I1006 19:57:22.416316  214483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:57:22.425752  214483 out.go:179] * Verifying Kubernetes components...
	I1006 19:57:22.435321  214483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:22.468709  214483 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 19:57:22.469735  214483 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-997276"
	W1006 19:57:22.469752  214483 addons.go:247] addon default-storageclass should already be in state true
	I1006 19:57:22.469777  214483 host.go:66] Checking if "default-k8s-diff-port-997276" exists ...
	I1006 19:57:22.470208  214483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:57:22.476480  214483 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:57:22.476511  214483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 19:57:22.476582  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:22.492407  214483 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1006 19:57:22.501964  214483 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1006 19:57:22.504967  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1006 19:57:22.504992  214483 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1006 19:57:22.505058  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:22.512375  214483 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 19:57:22.512410  214483 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 19:57:22.512475  214483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:57:22.535860  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:22.551596  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:22.561543  214483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:57:22.849689  214483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:57:22.892747  214483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 19:57:22.940973  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1006 19:57:22.940994  214483 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1006 19:57:22.994971  214483 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-997276" to be "Ready" ...
	I1006 19:57:23.014365  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1006 19:57:23.014389  214483 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1006 19:57:23.115077  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1006 19:57:23.115098  214483 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1006 19:57:23.115788  214483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 19:57:23.196246  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1006 19:57:23.196270  214483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1006 19:57:23.328901  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1006 19:57:23.328974  214483 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1006 19:57:23.411290  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1006 19:57:23.411314  214483 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1006 19:57:23.523427  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1006 19:57:23.523452  214483 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1006 19:57:23.568219  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1006 19:57:23.568243  214483 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1006 19:57:23.606884  214483 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 19:57:23.606909  214483 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1006 19:57:23.643400  214483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1006 19:57:32.041277  213865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.06109049s)
	I1006 19:57:32.041348  213865 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (12.02659519s)
	I1006 19:57:32.041363  213865 api_server.go:72] duration metric: took 12.619912424s to wait for apiserver process to appear ...
	I1006 19:57:32.041374  213865 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:57:32.041392  213865 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1006 19:57:32.041711  213865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.961909137s)
	I1006 19:57:32.093122  213865 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1006 19:57:32.109309  213865 api_server.go:141] control plane version: v1.34.1
	I1006 19:57:32.109341  213865 api_server.go:131] duration metric: took 67.960062ms to wait for apiserver health ...
	I1006 19:57:32.109351  213865 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:57:32.126413  213865 system_pods.go:59] 8 kube-system pods found
	I1006 19:57:32.126450  213865 system_pods.go:61] "coredns-66bc5c9577-z6drc" [4f782721-b2ed-4a40-9181-d83ac1315d08] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1006 19:57:32.126460  213865 system_pods.go:61] "etcd-newest-cni-988436" [b27477b1-584a-48dd-964f-383c3f41e66f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:57:32.126466  213865 system_pods.go:61] "kindnet-v4krt" [8b2c3ef8-c3bb-4e24-a72a-5a696590f257] Running
	I1006 19:57:32.126474  213865 system_pods.go:61] "kube-apiserver-newest-cni-988436" [21da4d62-1ceb-4988-a95f-d00aeed96f63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:57:32.126480  213865 system_pods.go:61] "kube-controller-manager-newest-cni-988436" [b07be9c3-cd1b-4026-8c91-76ab67ef61df] Running
	I1006 19:57:32.126490  213865 system_pods.go:61] "kube-proxy-wsgmd" [b2289712-8aa7-4ef1-909f-02322c74d8ee] Running
	I1006 19:57:32.126497  213865 system_pods.go:61] "kube-scheduler-newest-cni-988436" [e8d89cea-a1d3-4c7b-ac59-50ea8df07dcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:57:32.126505  213865 system_pods.go:61] "storage-provisioner" [6120daa3-8711-44b0-8951-f629755eb03c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1006 19:57:32.126512  213865 system_pods.go:74] duration metric: took 17.15569ms to wait for pod list to return data ...
	I1006 19:57:32.126525  213865 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:57:32.149057  213865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.64959263s)
	I1006 19:57:32.152464  213865 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-988436 addons enable metrics-server
	
	I1006 19:57:32.152968  213865 default_sa.go:45] found service account: "default"
	I1006 19:57:32.153018  213865 default_sa.go:55] duration metric: took 26.48621ms for default service account to be created ...
	I1006 19:57:32.153045  213865 kubeadm.go:586] duration metric: took 12.731592032s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1006 19:57:32.153091  213865 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:57:32.159427  213865 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1006 19:57:32.162497  213865 addons.go:514] duration metric: took 12.741122933s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1006 19:57:32.163075  213865 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:57:32.163098  213865 node_conditions.go:123] node cpu capacity is 2
	I1006 19:57:32.163110  213865 node_conditions.go:105] duration metric: took 9.998766ms to run NodePressure ...
	I1006 19:57:32.163123  213865 start.go:241] waiting for startup goroutines ...
	I1006 19:57:32.163130  213865 start.go:246] waiting for cluster config update ...
	I1006 19:57:32.163142  213865 start.go:255] writing updated cluster config ...
	I1006 19:57:32.163437  213865 ssh_runner.go:195] Run: rm -f paused
	I1006 19:57:32.255597  213865 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 19:57:32.259239  213865 out.go:179] * Done! kubectl is now configured to use "newest-cni-988436" cluster and "default" namespace by default
	I1006 19:57:29.893107  214483 node_ready.go:49] node "default-k8s-diff-port-997276" is "Ready"
	I1006 19:57:29.893140  214483 node_ready.go:38] duration metric: took 6.898129445s for node "default-k8s-diff-port-997276" to be "Ready" ...
	I1006 19:57:29.893155  214483 api_server.go:52] waiting for apiserver process to appear ...
	I1006 19:57:29.893213  214483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:57:33.843830  214483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.951040231s)
	I1006 19:57:33.843887  214483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.728078847s)
	I1006 19:57:33.844135  214483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.200682947s)
	I1006 19:57:33.844278  214483 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.951048593s)
	I1006 19:57:33.844299  214483 api_server.go:72] duration metric: took 11.433738908s to wait for apiserver process to appear ...
	I1006 19:57:33.844305  214483 api_server.go:88] waiting for apiserver healthz status ...
	I1006 19:57:33.844323  214483 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1006 19:57:33.846950  214483 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-997276 addons enable metrics-server
	
	I1006 19:57:33.867502  214483 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1006 19:57:33.877208  214483 api_server.go:141] control plane version: v1.34.1
	I1006 19:57:33.877240  214483 api_server.go:131] duration metric: took 32.928865ms to wait for apiserver health ...
	I1006 19:57:33.877249  214483 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 19:57:33.889547  214483 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1006 19:57:33.892695  214483 addons.go:514] duration metric: took 11.481803347s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1006 19:57:33.895319  214483 system_pods.go:59] 8 kube-system pods found
	I1006 19:57:33.895353  214483 system_pods.go:61] "coredns-66bc5c9577-bns67" [89f11c5e-2682-4227-80ba-2fe8b97c1629] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:57:33.895363  214483 system_pods.go:61] "etcd-default-k8s-diff-port-997276" [721c87e6-60f9-41e8-ac17-f6029a04e17e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:57:33.895369  214483 system_pods.go:61] "kindnet-twtwt" [e281e8b3-9cc4-41fb-8e22-a66ef4e23a38] Running
	I1006 19:57:33.895377  214483 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-997276" [7d10547b-e1b4-4af4-8da7-f4d28894afd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:57:33.895383  214483 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-997276" [a477444a-eb40-45c8-8588-842a7de0164e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:57:33.895388  214483 system_pods.go:61] "kube-proxy-zl7gg" [05397544-ebaf-4f98-8762-9ede9c706bc9] Running
	I1006 19:57:33.895395  214483 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-997276" [1b4b8d6e-f4d6-40f1-99eb-2d33fc3533a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:57:33.895399  214483 system_pods.go:61] "storage-provisioner" [3cd050f9-3953-4804-bbda-79ae9e50cf67] Running
	I1006 19:57:33.895405  214483 system_pods.go:74] duration metric: took 18.149834ms to wait for pod list to return data ...
	I1006 19:57:33.895412  214483 default_sa.go:34] waiting for default service account to be created ...
	I1006 19:57:33.898626  214483 default_sa.go:45] found service account: "default"
	I1006 19:57:33.898654  214483 default_sa.go:55] duration metric: took 3.235383ms for default service account to be created ...
	I1006 19:57:33.898664  214483 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 19:57:33.903206  214483 system_pods.go:86] 8 kube-system pods found
	I1006 19:57:33.903243  214483 system_pods.go:89] "coredns-66bc5c9577-bns67" [89f11c5e-2682-4227-80ba-2fe8b97c1629] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 19:57:33.903255  214483 system_pods.go:89] "etcd-default-k8s-diff-port-997276" [721c87e6-60f9-41e8-ac17-f6029a04e17e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 19:57:33.903261  214483 system_pods.go:89] "kindnet-twtwt" [e281e8b3-9cc4-41fb-8e22-a66ef4e23a38] Running
	I1006 19:57:33.903269  214483 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-997276" [7d10547b-e1b4-4af4-8da7-f4d28894afd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 19:57:33.903278  214483 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-997276" [a477444a-eb40-45c8-8588-842a7de0164e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 19:57:33.903286  214483 system_pods.go:89] "kube-proxy-zl7gg" [05397544-ebaf-4f98-8762-9ede9c706bc9] Running
	I1006 19:57:33.903295  214483 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-997276" [1b4b8d6e-f4d6-40f1-99eb-2d33fc3533a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 19:57:33.903308  214483 system_pods.go:89] "storage-provisioner" [3cd050f9-3953-4804-bbda-79ae9e50cf67] Running
	I1006 19:57:33.903316  214483 system_pods.go:126] duration metric: took 4.646921ms to wait for k8s-apps to be running ...
	I1006 19:57:33.903327  214483 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 19:57:33.903380  214483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:57:33.926146  214483 system_svc.go:56] duration metric: took 22.790289ms WaitForService to wait for kubelet
	I1006 19:57:33.926222  214483 kubeadm.go:586] duration metric: took 11.515659311s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:57:33.926257  214483 node_conditions.go:102] verifying NodePressure condition ...
	I1006 19:57:33.945132  214483 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 19:57:33.945213  214483 node_conditions.go:123] node cpu capacity is 2
	I1006 19:57:33.945238  214483 node_conditions.go:105] duration metric: took 18.964512ms to run NodePressure ...
	I1006 19:57:33.945262  214483 start.go:241] waiting for startup goroutines ...
	I1006 19:57:33.945302  214483 start.go:246] waiting for cluster config update ...
	I1006 19:57:33.945338  214483 start.go:255] writing updated cluster config ...
	I1006 19:57:33.945657  214483 ssh_runner.go:195] Run: rm -f paused
	I1006 19:57:33.951121  214483 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:57:33.959929  214483 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bns67" in "kube-system" namespace to be "Ready" or be gone ...
	W1006 19:57:35.981599  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
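
The apiserver healthz probe recorded earlier in this log (api_server.go at 19:57:33) can be reproduced by hand against the same endpoint; the address and port 192.168.76.2:8444 are specific to this run and would differ on another profile:

	curl -k https://192.168.76.2:8444/healthz
	# a healthy apiserver returns HTTP 200 with the body "ok", matching the response captured above
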
	
	
	==> CRI-O <==
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.706160378Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.720471429Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5fe7dd2a-b2ec-4195-9221-40b78db90652 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.742261937Z" level=info msg="Ran pod sandbox 45d1d8eb3837633e6bbf978816e1006e6520694fbd8fd801b6229b166d19aa3b with infra container: kube-system/kindnet-v4krt/POD" id=5fe7dd2a-b2ec-4195-9221-40b78db90652 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.783985393Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-wsgmd/POD" id=83159863-0b44-4547-96f3-d719f5596566 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.784044053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.795307359Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=83159863-0b44-4547-96f3-d719f5596566 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.811028725Z" level=info msg="Ran pod sandbox 3e9b11bc6a5ba5294cb27f8bb6b8d42d84a1640d5b5093b093b408af354b6145 with infra container: kube-system/kube-proxy-wsgmd/POD" id=83159863-0b44-4547-96f3-d719f5596566 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.811868954Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=acfaac53-02e4-47ad-bf1f-aed72cbc5d87 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.854051569Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8dad303a-aea2-4355-b6f3-d9b911dfc312 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.864193435Z" level=info msg="Creating container: kube-system/kindnet-v4krt/kindnet-cni" id=0d6d1a5b-14e2-4135-9eec-f2520365b4c7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.864552835Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.866445776Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8e59c6f6-3006-4d34-9d3b-fb4ea16b7cc1 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.882239792Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4b3b15a3-286f-44e9-acb5-e38fdaa389d1 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.883965436Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.884728265Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.89537368Z" level=info msg="Creating container: kube-system/kube-proxy-wsgmd/kube-proxy" id=0d197ae7-e448-4be3-9ba4-31d5d61301e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.895910568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.940813341Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:29 newest-cni-988436 crio[615]: time="2025-10-06T19:57:29.941735819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:57:30 newest-cni-988436 crio[615]: time="2025-10-06T19:57:30.095120544Z" level=info msg="Created container 5dfc72bf385ca72f9ef0e7c8285dbd5e03723e68c0e33f9d091cd08c682f638b: kube-system/kindnet-v4krt/kindnet-cni" id=0d6d1a5b-14e2-4135-9eec-f2520365b4c7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:57:30 newest-cni-988436 crio[615]: time="2025-10-06T19:57:30.103266853Z" level=info msg="Starting container: 5dfc72bf385ca72f9ef0e7c8285dbd5e03723e68c0e33f9d091cd08c682f638b" id=3b524d9b-fc91-4fb9-990b-fc4e4a448308 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:57:30 newest-cni-988436 crio[615]: time="2025-10-06T19:57:30.11042066Z" level=info msg="Started container" PID=1065 containerID=5dfc72bf385ca72f9ef0e7c8285dbd5e03723e68c0e33f9d091cd08c682f638b description=kube-system/kindnet-v4krt/kindnet-cni id=3b524d9b-fc91-4fb9-990b-fc4e4a448308 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45d1d8eb3837633e6bbf978816e1006e6520694fbd8fd801b6229b166d19aa3b
	Oct 06 19:57:30 newest-cni-988436 crio[615]: time="2025-10-06T19:57:30.249821298Z" level=info msg="Created container 00e8681df5f941509c743b69461398538f9a51c14d6c19d45910d346d99d8d58: kube-system/kube-proxy-wsgmd/kube-proxy" id=0d197ae7-e448-4be3-9ba4-31d5d61301e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:57:30 newest-cni-988436 crio[615]: time="2025-10-06T19:57:30.251290322Z" level=info msg="Starting container: 00e8681df5f941509c743b69461398538f9a51c14d6c19d45910d346d99d8d58" id=31f2eacb-8a31-4dd7-a038-649b85a5cdfc name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:57:30 newest-cni-988436 crio[615]: time="2025-10-06T19:57:30.260027469Z" level=info msg="Started container" PID=1066 containerID=00e8681df5f941509c743b69461398538f9a51c14d6c19d45910d346d99d8d58 description=kube-system/kube-proxy-wsgmd/kube-proxy id=31f2eacb-8a31-4dd7-a038-649b85a5cdfc name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e9b11bc6a5ba5294cb27f8bb6b8d42d84a1640d5b5093b093b408af354b6145
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	00e8681df5f94       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   9 seconds ago       Running             kube-proxy                1                   3e9b11bc6a5ba       kube-proxy-wsgmd                            kube-system
	5dfc72bf385ca       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   9 seconds ago       Running             kindnet-cni               1                   45d1d8eb38376       kindnet-v4krt                               kube-system
	f34d65961eeb0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago      Running             kube-apiserver            1                   9a4a7e572144e       kube-apiserver-newest-cni-988436            kube-system
	134549c8cebf1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago      Running             kube-scheduler            1                   f7eb990a0afe7       kube-scheduler-newest-cni-988436            kube-system
	edfddc0e3f8d1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   20 seconds ago      Running             kube-controller-manager   1                   a3c2aca9c783d       kube-controller-manager-newest-cni-988436   kube-system
	ef74cfcf93f82       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   20 seconds ago      Running             etcd                      1                   0d811681c1a68       etcd-newest-cni-988436                      kube-system
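
An equivalent listing can be produced on the node itself with crictl, using the same label filter the restart path in this log runs (a sketch for reference, not part of the captured output):

	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	# add --quiet to print only container IDs, as the logged "crictl ps -a --quiet" invocation does
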
	
	
	==> describe nodes <==
	Name:               newest-cni-988436
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-988436
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=newest-cni-988436
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_57_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:56:57 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-988436
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:57:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:57:29 +0000   Mon, 06 Oct 2025 19:56:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:57:29 +0000   Mon, 06 Oct 2025 19:56:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:57:29 +0000   Mon, 06 Oct 2025 19:56:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 06 Oct 2025 19:57:29 +0000   Mon, 06 Oct 2025 19:56:53 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-988436
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 2987c5e9cbd44110adb07016a441e3c4
	  System UUID:                073e7007-fd42-4128-9331-8ee710c3ffcc
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-988436                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-v4krt                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-newest-cni-988436             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-newest-cni-988436    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-wsgmd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-988436             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 32s                kube-proxy       
	  Normal   Starting                 7s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node newest-cni-988436 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 47s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 47s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node newest-cni-988436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node newest-cni-988436 status is now: NodeHasSufficientPID
	  Normal   Starting                 39s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 39s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     38s                kubelet          Node newest-cni-988436 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    38s                kubelet          Node newest-cni-988436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  38s                kubelet          Node newest-cni-988436 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           34s                node-controller  Node newest-cni-988436 event: Registered Node newest-cni-988436 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 21s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node newest-cni-988436 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node newest-cni-988436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x8 over 21s)  kubelet          Node newest-cni-988436 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6s                 node-controller  Node newest-cni-988436 event: Registered Node newest-cni-988436 in Controller
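
The not-ready taint and the Ready=False condition above are consistent with the Pending coredns and storage-provisioner pods reported earlier for this node; assuming kubectl is pointed at the newest-cni-988436 cluster, the taint can be inspected directly:

	kubectl get node newest-cni-988436 -o jsonpath='{.spec.taints}'
	# the taint is expected to clear once the kindnet pod started above writes a CNI config into /etc/cni/net.d
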
	
	
	==> dmesg <==
	[  +1.884868] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:52] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:55] overlayfs: idmapped layers are currently not supported
	[ +29.761517] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:56] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:57] overlayfs: idmapped layers are currently not supported
	[  +2.641672] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ef74cfcf93f82ed60cc7dc8b6e3da6bd2c5e5437877a94ad1f08dfe2f858aa0c] <==
	{"level":"warn","ts":"2025-10-06T19:57:25.839355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:25.916310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:25.988053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.068623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.093766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.168308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.208491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.257503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.336104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.383142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.400366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.453356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.509744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.562240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.630171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.778268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.831030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.968591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.016601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.067907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.112687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.132557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.153656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.196771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.287945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53130","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:57:39 up  1:39,  0 user,  load average: 5.30, 3.37, 2.35
	Linux newest-cni-988436 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5dfc72bf385ca72f9ef0e7c8285dbd5e03723e68c0e33f9d091cd08c682f638b] <==
	I1006 19:57:30.246721       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:57:30.246978       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1006 19:57:30.247073       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:57:30.247084       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:57:30.247094       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:57:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:57:30.473887       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:57:30.473905       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:57:30.473914       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:57:30.474191       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [f34d65961eeb0e039a90462d44820d5cb40290d6829def8b49b70d9904b4e966] <==
	I1006 19:57:29.239000       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 19:57:29.242481       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1006 19:57:29.254050       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1006 19:57:29.254147       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1006 19:57:29.254179       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1006 19:57:29.260013       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1006 19:57:29.260031       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1006 19:57:29.271655       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1006 19:57:29.273228       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1006 19:57:29.273707       1 aggregator.go:171] initial CRD sync complete...
	I1006 19:57:29.273795       1 autoregister_controller.go:144] Starting autoregister controller
	I1006 19:57:29.273825       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 19:57:29.275709       1 cache.go:39] Caches are synced for autoregister controller
	I1006 19:57:29.446887       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 19:57:29.455203       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:57:31.514676       1 controller.go:667] quota admission added evaluator for: namespaces
	I1006 19:57:31.622208       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 19:57:31.696272       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:57:31.727086       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:57:32.024648       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.144.3"}
	I1006 19:57:32.137189       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.218.23"}
	I1006 19:57:33.543988       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1006 19:57:33.591909       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 19:57:33.794671       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 19:57:33.847266       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [edfddc0e3f8d1c28acd50b5fa86583ed3677369806e4f0393d57c1cb3eba08dd] <==
	I1006 19:57:33.351748       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1006 19:57:33.354535       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1006 19:57:33.354567       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1006 19:57:33.354591       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1006 19:57:33.360387       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1006 19:57:33.382761       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1006 19:57:33.382844       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1006 19:57:33.385335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1006 19:57:33.387841       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1006 19:57:33.389399       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1006 19:57:33.395829       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1006 19:57:33.395948       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1006 19:57:33.396014       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1006 19:57:33.396314       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1006 19:57:33.396788       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1006 19:57:33.404309       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:57:33.406728       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1006 19:57:33.425252       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 19:57:33.428821       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:57:33.428851       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:57:33.428859       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 19:57:33.431873       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1006 19:57:33.434061       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 19:57:33.449823       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1006 19:57:33.451940       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [00e8681df5f941509c743b69461398538f9a51c14d6c19d45910d346d99d8d58] <==
	I1006 19:57:31.268957       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:57:31.373224       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:57:31.503050       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:57:31.503098       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1006 19:57:31.503170       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:57:31.693112       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:57:31.693167       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:57:31.762434       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:57:31.762806       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:57:31.762819       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:57:31.784514       1 config.go:200] "Starting service config controller"
	I1006 19:57:31.784539       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:57:31.784564       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:57:31.784569       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:57:31.784583       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:57:31.784587       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:57:31.794089       1 config.go:309] "Starting node config controller"
	I1006 19:57:31.794117       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:57:31.794127       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:57:31.885574       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 19:57:31.885681       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 19:57:31.885761       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [134549c8cebf10c064c8c3afaf8ea9bc7932630b60156ffdc5ba7e9afcd15c21] <==
	I1006 19:57:25.616522       1 serving.go:386] Generated self-signed cert in-memory
	W1006 19:57:28.600104       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1006 19:57:28.600229       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1006 19:57:28.600266       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1006 19:57:28.600329       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1006 19:57:29.173708       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 19:57:29.173748       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:57:29.213591       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 19:57:29.213704       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:57:29.213720       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:57:29.213734       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 19:57:29.414154       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 19:57:23 newest-cni-988436 kubelet[729]: E1006 19:57:23.628338     729 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-988436\" not found" node="newest-cni-988436"
	Oct 06 19:57:28 newest-cni-988436 kubelet[729]: E1006 19:57:28.554898     729 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"newest-cni-988436\" not found"
	Oct 06 19:57:28 newest-cni-988436 kubelet[729]: I1006 19:57:28.723847     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.330186     729 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.330440     729 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.330545     729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: E1006 19:57:29.336319     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-988436\" already exists" pod="kube-system/kube-controller-manager-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.336503     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.337414     729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: E1006 19:57:29.359052     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-988436\" already exists" pod="kube-system/kube-scheduler-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.359234     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.366632     729 apiserver.go:52] "Watching apiserver"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: E1006 19:57:29.396916     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-988436\" already exists" pod="kube-system/etcd-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.397100     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.408338     729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.425070     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8b2c3ef8-c3bb-4e24-a72a-5a696590f257-cni-cfg\") pod \"kindnet-v4krt\" (UID: \"8b2c3ef8-c3bb-4e24-a72a-5a696590f257\") " pod="kube-system/kindnet-v4krt"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.425275     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b2c3ef8-c3bb-4e24-a72a-5a696590f257-xtables-lock\") pod \"kindnet-v4krt\" (UID: \"8b2c3ef8-c3bb-4e24-a72a-5a696590f257\") " pod="kube-system/kindnet-v4krt"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.425391     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2289712-8aa7-4ef1-909f-02322c74d8ee-xtables-lock\") pod \"kube-proxy-wsgmd\" (UID: \"b2289712-8aa7-4ef1-909f-02322c74d8ee\") " pod="kube-system/kube-proxy-wsgmd"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.425507     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2289712-8aa7-4ef1-909f-02322c74d8ee-lib-modules\") pod \"kube-proxy-wsgmd\" (UID: \"b2289712-8aa7-4ef1-909f-02322c74d8ee\") " pod="kube-system/kube-proxy-wsgmd"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.425639     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b2c3ef8-c3bb-4e24-a72a-5a696590f257-lib-modules\") pod \"kindnet-v4krt\" (UID: \"8b2c3ef8-c3bb-4e24-a72a-5a696590f257\") " pod="kube-system/kindnet-v4krt"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: E1006 19:57:29.457547     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-988436\" already exists" pod="kube-system/kube-apiserver-newest-cni-988436"
	Oct 06 19:57:29 newest-cni-988436 kubelet[729]: I1006 19:57:29.499204     729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 06 19:57:34 newest-cni-988436 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 06 19:57:34 newest-cni-988436 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 06 19:57:34 newest-cni-988436 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-988436 -n newest-cni-988436
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-988436 -n newest-cni-988436: exit status 2 (447.803162ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-988436 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-z6drc storage-provisioner dashboard-metrics-scraper-6ffb444bf9-57k92 kubernetes-dashboard-855c9754f9-7h2mx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-988436 describe pod coredns-66bc5c9577-z6drc storage-provisioner dashboard-metrics-scraper-6ffb444bf9-57k92 kubernetes-dashboard-855c9754f9-7h2mx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-988436 describe pod coredns-66bc5c9577-z6drc storage-provisioner dashboard-metrics-scraper-6ffb444bf9-57k92 kubernetes-dashboard-855c9754f9-7h2mx: exit status 1 (106.827625ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-z6drc" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-57k92" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7h2mx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-988436 describe pod coredns-66bc5c9577-z6drc storage-provisioner dashboard-metrics-scraper-6ffb444bf9-57k92 kubernetes-dashboard-855c9754f9-7h2mx: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.63s)
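For local triage, the non-running-pods query that helpers_test.go issued above can be replayed outside the harness. The following is a minimal Go sketch (not part of the test suite; the kubeconfig context name is taken from the failing run) that shells out to kubectl with the same field selector and jsonpath expression:

	// replay_nonrunning.go: replays the post-mortem pod check from helpers_test.go.
	// Illustrative sketch only; the context name comes from the failing run above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "newest-cni-988436" // kubeconfig context created by the test run

		// Same query the harness ran: every pod, in any namespace, whose phase
		// is not Running, printing only the pod names.
		out, err := exec.Command("kubectl",
			"--context", ctx,
			"get", "po", "-A",
			"--field-selector=status.phase!=Running",
			"-o", "jsonpath={.items[*].metadata.name}",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s\n", err, out)
			return
		}

		names := strings.Fields(string(out))
		if len(names) == 0 {
			fmt.Println("no non-running pods")
			return
		}
		fmt.Printf("non-running pods: %s\n", strings.Join(names, " "))
	}

In the run above the same query returned coredns-66bc5c9577-z6drc, storage-provisioner, dashboard-metrics-scraper-6ffb444bf9-57k92 and kubernetes-dashboard-855c9754f9-7h2mx, and the follow-up describe could no longer find any of them.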

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-997276 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-997276 --alsologtostderr -v=1: exit status 80 (2.619262852s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-997276 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:58:18.953913  221282 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:58:18.954094  221282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:58:18.954126  221282 out.go:374] Setting ErrFile to fd 2...
	I1006 19:58:18.954145  221282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:58:18.954475  221282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:58:18.954814  221282 out.go:368] Setting JSON to false
	I1006 19:58:18.954884  221282 mustload.go:65] Loading cluster: default-k8s-diff-port-997276
	I1006 19:58:18.955417  221282 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:58:18.955998  221282 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-997276 --format={{.State.Status}}
	I1006 19:58:18.974604  221282 host.go:66] Checking if "default-k8s-diff-port-997276" exists ...
	I1006 19:58:18.974921  221282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:58:19.069931  221282 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-06 19:58:19.057563177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:58:19.070586  221282 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-997276 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1006 19:58:19.074001  221282 out.go:179] * Pausing node default-k8s-diff-port-997276 ... 
	I1006 19:58:19.076780  221282 host.go:66] Checking if "default-k8s-diff-port-997276" exists ...
	I1006 19:58:19.077135  221282 ssh_runner.go:195] Run: systemctl --version
	I1006 19:58:19.077185  221282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-997276
	I1006 19:58:19.108730  221282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/default-k8s-diff-port-997276/id_rsa Username:docker}
	I1006 19:58:19.215342  221282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:58:19.249438  221282 pause.go:51] kubelet running: true
	I1006 19:58:19.249508  221282 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:58:19.568142  221282 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:58:19.568271  221282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:58:19.643796  221282 cri.go:89] found id: "94543004a692d2869152c4706f66a2c0539fd2fc422c7b5955c377fa34be7e27"
	I1006 19:58:19.643819  221282 cri.go:89] found id: "7b0a7293bb03120d2db1b7f8559bcd7af73005b9a4cb29e752ff4eff8a6184d5"
	I1006 19:58:19.643825  221282 cri.go:89] found id: "d7f4a4e3f96ad1311da2b4b0839dfdde33ceb36b5b11c73ecf55feda067baebb"
	I1006 19:58:19.643829  221282 cri.go:89] found id: "021f8a3c8caf4e3062528b2830e6d3bd81c80b253a054789688df77fd6566fa9"
	I1006 19:58:19.643833  221282 cri.go:89] found id: "9f8260c3ac833cb980660b518720852ae3a00d4314fd06714ca37ecb0e765aa1"
	I1006 19:58:19.643839  221282 cri.go:89] found id: "69016b9f99d774f122c6a833714dc8aa9a9bed369630bb89cab483dc73bb2308"
	I1006 19:58:19.643842  221282 cri.go:89] found id: "d55af54186cd2219cd4bae48a2441c417921efdadcb8fc6384bac72028d350db"
	I1006 19:58:19.643845  221282 cri.go:89] found id: "e8361e8e1ee0632342d163d9478d496057250b3eeebfb0c28f6c1c0c8879ec24"
	I1006 19:58:19.643848  221282 cri.go:89] found id: "3ccfad32d3fc760df3c3a85997578af79aef13072ed1d94e610c623e692996c5"
	I1006 19:58:19.643854  221282 cri.go:89] found id: "f23c7b1e4e02b9f141c926732d0f2535e9f3243ba0bee45a42e1d11d5e2c978c"
	I1006 19:58:19.643858  221282 cri.go:89] found id: "4e61e37ab2416491e37ee9385b1a5a53f023ad0488c291c5826cce8b6d887514"
	I1006 19:58:19.643861  221282 cri.go:89] found id: ""
	I1006 19:58:19.643913  221282 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:58:19.656388  221282 retry.go:31] will retry after 191.154875ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:58:19Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:58:19.847834  221282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:58:19.863914  221282 pause.go:51] kubelet running: false
	I1006 19:58:19.863986  221282 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:58:20.100521  221282 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:58:20.100638  221282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:58:20.230579  221282 cri.go:89] found id: "94543004a692d2869152c4706f66a2c0539fd2fc422c7b5955c377fa34be7e27"
	I1006 19:58:20.230644  221282 cri.go:89] found id: "7b0a7293bb03120d2db1b7f8559bcd7af73005b9a4cb29e752ff4eff8a6184d5"
	I1006 19:58:20.230664  221282 cri.go:89] found id: "d7f4a4e3f96ad1311da2b4b0839dfdde33ceb36b5b11c73ecf55feda067baebb"
	I1006 19:58:20.230683  221282 cri.go:89] found id: "021f8a3c8caf4e3062528b2830e6d3bd81c80b253a054789688df77fd6566fa9"
	I1006 19:58:20.230699  221282 cri.go:89] found id: "9f8260c3ac833cb980660b518720852ae3a00d4314fd06714ca37ecb0e765aa1"
	I1006 19:58:20.230716  221282 cri.go:89] found id: "69016b9f99d774f122c6a833714dc8aa9a9bed369630bb89cab483dc73bb2308"
	I1006 19:58:20.230735  221282 cri.go:89] found id: "d55af54186cd2219cd4bae48a2441c417921efdadcb8fc6384bac72028d350db"
	I1006 19:58:20.230752  221282 cri.go:89] found id: "e8361e8e1ee0632342d163d9478d496057250b3eeebfb0c28f6c1c0c8879ec24"
	I1006 19:58:20.230768  221282 cri.go:89] found id: "3ccfad32d3fc760df3c3a85997578af79aef13072ed1d94e610c623e692996c5"
	I1006 19:58:20.230801  221282 cri.go:89] found id: "f23c7b1e4e02b9f141c926732d0f2535e9f3243ba0bee45a42e1d11d5e2c978c"
	I1006 19:58:20.230826  221282 cri.go:89] found id: "4e61e37ab2416491e37ee9385b1a5a53f023ad0488c291c5826cce8b6d887514"
	I1006 19:58:20.230844  221282 cri.go:89] found id: ""
	I1006 19:58:20.230909  221282 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:58:20.252543  221282 retry.go:31] will retry after 369.365918ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:58:20Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:58:20.622085  221282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:58:20.636149  221282 pause.go:51] kubelet running: false
	I1006 19:58:20.636279  221282 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:58:20.804563  221282 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:58:20.804639  221282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:58:20.889709  221282 cri.go:89] found id: "94543004a692d2869152c4706f66a2c0539fd2fc422c7b5955c377fa34be7e27"
	I1006 19:58:20.889733  221282 cri.go:89] found id: "7b0a7293bb03120d2db1b7f8559bcd7af73005b9a4cb29e752ff4eff8a6184d5"
	I1006 19:58:20.889739  221282 cri.go:89] found id: "d7f4a4e3f96ad1311da2b4b0839dfdde33ceb36b5b11c73ecf55feda067baebb"
	I1006 19:58:20.889743  221282 cri.go:89] found id: "021f8a3c8caf4e3062528b2830e6d3bd81c80b253a054789688df77fd6566fa9"
	I1006 19:58:20.889746  221282 cri.go:89] found id: "9f8260c3ac833cb980660b518720852ae3a00d4314fd06714ca37ecb0e765aa1"
	I1006 19:58:20.889750  221282 cri.go:89] found id: "69016b9f99d774f122c6a833714dc8aa9a9bed369630bb89cab483dc73bb2308"
	I1006 19:58:20.889753  221282 cri.go:89] found id: "d55af54186cd2219cd4bae48a2441c417921efdadcb8fc6384bac72028d350db"
	I1006 19:58:20.889756  221282 cri.go:89] found id: "e8361e8e1ee0632342d163d9478d496057250b3eeebfb0c28f6c1c0c8879ec24"
	I1006 19:58:20.889759  221282 cri.go:89] found id: "3ccfad32d3fc760df3c3a85997578af79aef13072ed1d94e610c623e692996c5"
	I1006 19:58:20.889764  221282 cri.go:89] found id: "f23c7b1e4e02b9f141c926732d0f2535e9f3243ba0bee45a42e1d11d5e2c978c"
	I1006 19:58:20.889768  221282 cri.go:89] found id: "4e61e37ab2416491e37ee9385b1a5a53f023ad0488c291c5826cce8b6d887514"
	I1006 19:58:20.889771  221282 cri.go:89] found id: ""
	I1006 19:58:20.889819  221282 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:58:20.901565  221282 retry.go:31] will retry after 301.336341ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:58:20Z" level=error msg="open /run/runc: no such file or directory"
	I1006 19:58:21.203905  221282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:58:21.218112  221282 pause.go:51] kubelet running: false
	I1006 19:58:21.218214  221282 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1006 19:58:21.391055  221282 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1006 19:58:21.391133  221282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1006 19:58:21.461569  221282 cri.go:89] found id: "94543004a692d2869152c4706f66a2c0539fd2fc422c7b5955c377fa34be7e27"
	I1006 19:58:21.461593  221282 cri.go:89] found id: "7b0a7293bb03120d2db1b7f8559bcd7af73005b9a4cb29e752ff4eff8a6184d5"
	I1006 19:58:21.461599  221282 cri.go:89] found id: "d7f4a4e3f96ad1311da2b4b0839dfdde33ceb36b5b11c73ecf55feda067baebb"
	I1006 19:58:21.461603  221282 cri.go:89] found id: "021f8a3c8caf4e3062528b2830e6d3bd81c80b253a054789688df77fd6566fa9"
	I1006 19:58:21.461606  221282 cri.go:89] found id: "9f8260c3ac833cb980660b518720852ae3a00d4314fd06714ca37ecb0e765aa1"
	I1006 19:58:21.461610  221282 cri.go:89] found id: "69016b9f99d774f122c6a833714dc8aa9a9bed369630bb89cab483dc73bb2308"
	I1006 19:58:21.461613  221282 cri.go:89] found id: "d55af54186cd2219cd4bae48a2441c417921efdadcb8fc6384bac72028d350db"
	I1006 19:58:21.461624  221282 cri.go:89] found id: "e8361e8e1ee0632342d163d9478d496057250b3eeebfb0c28f6c1c0c8879ec24"
	I1006 19:58:21.461628  221282 cri.go:89] found id: "3ccfad32d3fc760df3c3a85997578af79aef13072ed1d94e610c623e692996c5"
	I1006 19:58:21.461634  221282 cri.go:89] found id: "f23c7b1e4e02b9f141c926732d0f2535e9f3243ba0bee45a42e1d11d5e2c978c"
	I1006 19:58:21.461637  221282 cri.go:89] found id: "4e61e37ab2416491e37ee9385b1a5a53f023ad0488c291c5826cce8b6d887514"
	I1006 19:58:21.461640  221282 cri.go:89] found id: ""
	I1006 19:58:21.461690  221282 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 19:58:21.481323  221282 out.go:203] 
	W1006 19:58:21.484279  221282 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:58:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T19:58:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1006 19:58:21.484339  221282 out.go:285] * 
	* 
	W1006 19:58:21.489216  221282 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 19:58:21.492804  221282 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-997276 --alsologtostderr -v=1 failed: exit status 80
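Exit status 80 here corresponds to the GUEST_PAUSE error in the stderr block: each retry of the container listing fails with `open /run/runc: no such file or directory`, so the pause path never obtains a list of containers to freeze. The sketch below is an illustrative reproduction (not minikube code); the binary path and profile name are copied from the failing invocation, and the runc arguments are exactly what pause.go ran through ssh_runner:

	// pause_probe.go: replays the container listing that minikube's pause path
	// retried above. Illustrative only; profile and binary path come from the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		minikube := "out/minikube-linux-arm64"    // binary used by the failing test
		profile := "default-k8s-diff-port-997276" // profile that could not be paused

		// Same command the pause path ran inside the node:
		//   sudo runc list -f json
		// On this node it exited 1 with "open /run/runc: no such file or directory".
		cmd := exec.Command(minikube, "ssh", "-p", profile, "--",
			"sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output:\n%s\n", out)
		if err != nil {
			// A non-zero exit here corresponds to the "Process exited with
			// status 1" retries logged by retry.go before GUEST_PAUSE was raised.
			fmt.Printf("probe failed: %v\n", err)
		}
	}

Because pause only freezes containers it can enumerate, a failure of this listing surfaces as GUEST_PAUSE rather than as a partial pause.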
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-997276
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-997276:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b",
	        "Created": "2025-10-06T19:55:30.333531639Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 214640,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:57:12.741142127Z",
	            "FinishedAt": "2025-10-06T19:57:11.941718673Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/hosts",
	        "LogPath": "/var/lib/docker/containers/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b-json.log",
	        "Name": "/default-k8s-diff-port-997276",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-997276:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-997276",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b",
	                "LowerDir": "/var/lib/docker/overlay2/ca41d0f8b7cb8f1b3af001a8f5128ec86c614fe6ad93ed4708e799d1261586b4-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca41d0f8b7cb8f1b3af001a8f5128ec86c614fe6ad93ed4708e799d1261586b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca41d0f8b7cb8f1b3af001a8f5128ec86c614fe6ad93ed4708e799d1261586b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca41d0f8b7cb8f1b3af001a8f5128ec86c614fe6ad93ed4708e799d1261586b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-997276",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-997276/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-997276",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-997276",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-997276",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "549805a326b489bff8ea6bee639dadd07fdbb2a4809f0ebf5ec6f38cdf6f3638",
	            "SandboxKey": "/var/run/docker/netns/549805a326b4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-997276": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:90:90:ee:3c:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2e10a72004c0565fee9f56eb617f1837118ee48bf9bd5cadbc46998fb4ed527c",
	                    "EndpointID": "cd09782053ae9462a23480eef241c52360d5456bbde72773d527b29fa9d89acb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-997276",
	                        "4fc3831db948"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
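The port mappings in the NetworkSettings block above are what cli_runner resolved earlier when it built the ssh client (Port:33095 at 19:58:19.108730). Below is a small sketch, assuming the container from this report is still running, that extracts the same 22/tcp host port with a lightly simplified form of the Go template from the stderr block (minikube wraps the template in literal single quotes and strips them afterwards; the bare template is used here):

	// inspect_ssh_port.go: extracts the host port mapped to 22/tcp using the
	// docker inspect template logged by cli_runner.go above. Illustrative sketch.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		container := "default-k8s-diff-port-997276" // container name from this report

		// Template taken from the cli_runner invocation in the stderr block.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			fmt.Printf("docker inspect failed: %v\n", err)
			return
		}
		port := strings.TrimSpace(string(out))
		fmt.Printf("ssh is published on 127.0.0.1:%s\n", port) // 33095 in the inspect output above
	}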
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-997276 -n default-k8s-diff-port-997276
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-997276 -n default-k8s-diff-port-997276: exit status 2 (396.857323ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-997276 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-997276 logs -n 25: (1.57376318s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:56 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p disable-driver-mounts-932453                                                                                                                                                                                                               │ disable-driver-mounts-932453 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ start   │ -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:56 UTC │
	│ image   │ embed-certs-830393 image list --format=json                                                                                                                                                                                                   │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ pause   │ -p embed-certs-830393 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	│ delete  │ -p embed-certs-830393                                                                                                                                                                                                                         │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ delete  │ -p embed-certs-830393                                                                                                                                                                                                                         │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ start   │ -p newest-cni-988436 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-997276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-997276 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p newest-cni-988436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │                     │
	│ stop    │ -p newest-cni-988436 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable dashboard -p newest-cni-988436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ start   │ -p newest-cni-988436 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-997276 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ start   │ -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:58 UTC │
	│ image   │ newest-cni-988436 image list --format=json                                                                                                                                                                                                    │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ pause   │ -p newest-cni-988436 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │                     │
	│ delete  │ -p newest-cni-988436                                                                                                                                                                                                                          │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ delete  │ -p newest-cni-988436                                                                                                                                                                                                                          │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ start   │ -p auto-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-053944                  │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │                     │
	│ image   │ default-k8s-diff-port-997276 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:58 UTC │ 06 Oct 25 19:58 UTC │
	│ pause   │ -p default-k8s-diff-port-997276 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:57:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:57:43.816963  218923 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:57:43.817187  218923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:57:43.817215  218923 out.go:374] Setting ErrFile to fd 2...
	I1006 19:57:43.817235  218923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:57:43.817562  218923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:57:43.818031  218923 out.go:368] Setting JSON to false
	I1006 19:57:43.821074  218923 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5999,"bootTime":1759774665,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:57:43.821190  218923 start.go:140] virtualization:  
	I1006 19:57:43.825301  218923 out.go:179] * [auto-053944] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:57:43.830952  218923 notify.go:220] Checking for updates...
	I1006 19:57:43.833805  218923 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:57:43.836959  218923 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:57:43.840156  218923 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:57:43.844222  218923 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:57:43.847030  218923 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:57:43.850090  218923 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:57:43.853928  218923 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:43.854115  218923 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:57:43.884767  218923 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:57:43.884958  218923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:57:44.016936  218923 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:57:44.003015435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:57:44.017201  218923 docker.go:318] overlay module found
	I1006 19:57:44.020706  218923 out.go:179] * Using the docker driver based on user configuration
	I1006 19:57:44.027543  218923 start.go:304] selected driver: docker
	I1006 19:57:44.027572  218923 start.go:924] validating driver "docker" against <nil>
	I1006 19:57:44.027587  218923 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:57:44.028568  218923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:57:44.146328  218923 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:57:44.137706405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:57:44.146483  218923 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 19:57:44.146704  218923 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:57:44.149840  218923 out.go:179] * Using Docker driver with root privileges
	I1006 19:57:44.153075  218923 cni.go:84] Creating CNI manager for ""
	I1006 19:57:44.153148  218923 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:57:44.153156  218923 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 19:57:44.153236  218923 start.go:348] cluster config:
	{Name:auto-053944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-053944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1006 19:57:44.156837  218923 out.go:179] * Starting "auto-053944" primary control-plane node in "auto-053944" cluster
	I1006 19:57:44.159978  218923 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:57:44.163111  218923 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:57:44.165963  218923 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:57:44.166116  218923 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:57:44.166136  218923 cache.go:58] Caching tarball of preloaded images
	I1006 19:57:44.166453  218923 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:57:44.166466  218923 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:57:44.166576  218923 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/config.json ...
	I1006 19:57:44.166599  218923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/config.json: {Name:mk475550cdb661222b5a12bc2da86a7ec1e44c49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:44.166754  218923 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:57:44.194146  218923 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:57:44.194192  218923 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:57:44.194206  218923 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:57:44.194228  218923 start.go:360] acquireMachinesLock for auto-053944: {Name:mk39469c2dc6ed40f3259891729e63ae3e1e557a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:57:44.194336  218923 start.go:364] duration metric: took 93.352µs to acquireMachinesLock for "auto-053944"
	I1006 19:57:44.194360  218923 start.go:93] Provisioning new machine with config: &{Name:auto-053944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-053944 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:57:44.194419  218923 start.go:125] createHost starting for "" (driver="docker")
	W1006 19:57:42.466618  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	W1006 19:57:44.475252  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	W1006 19:57:46.967076  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	I1006 19:57:44.198147  218923 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 19:57:44.198409  218923 start.go:159] libmachine.API.Create for "auto-053944" (driver="docker")
	I1006 19:57:44.198445  218923 client.go:168] LocalClient.Create starting
	I1006 19:57:44.198519  218923 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem
	I1006 19:57:44.198554  218923 main.go:141] libmachine: Decoding PEM data...
	I1006 19:57:44.198567  218923 main.go:141] libmachine: Parsing certificate...
	I1006 19:57:44.198616  218923 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem
	I1006 19:57:44.198634  218923 main.go:141] libmachine: Decoding PEM data...
	I1006 19:57:44.198644  218923 main.go:141] libmachine: Parsing certificate...
	I1006 19:57:44.198999  218923 cli_runner.go:164] Run: docker network inspect auto-053944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 19:57:44.223325  218923 cli_runner.go:211] docker network inspect auto-053944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 19:57:44.223413  218923 network_create.go:284] running [docker network inspect auto-053944] to gather additional debugging logs...
	I1006 19:57:44.223428  218923 cli_runner.go:164] Run: docker network inspect auto-053944
	W1006 19:57:44.248876  218923 cli_runner.go:211] docker network inspect auto-053944 returned with exit code 1
	I1006 19:57:44.248901  218923 network_create.go:287] error running [docker network inspect auto-053944]: docker network inspect auto-053944: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-053944 not found
	I1006 19:57:44.248913  218923 network_create.go:289] output of [docker network inspect auto-053944]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-053944 not found
	
	** /stderr **
	I1006 19:57:44.249022  218923 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:57:44.288345  218923 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7058eae896da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:57:c0:bd:de:1b} reservation:<nil>}
	I1006 19:57:44.288675  218923 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e321ce8cf6dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:e6:76:87:fe:bb} reservation:<nil>}
	I1006 19:57:44.289104  218923 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-17bdbd7bd7be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:93:f3:19:55:10} reservation:<nil>}
	I1006 19:57:44.289370  218923 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2e10a72004c0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:2c:92:d4:96:5e} reservation:<nil>}
	I1006 19:57:44.289745  218923 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a35510}
	I1006 19:57:44.289765  218923 network_create.go:124] attempt to create docker network auto-053944 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1006 19:57:44.289825  218923 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-053944 auto-053944
	I1006 19:57:44.369041  218923 network_create.go:108] docker network auto-053944 192.168.85.0/24 created
	I1006 19:57:44.369069  218923 kic.go:121] calculated static IP "192.168.85.2" for the "auto-053944" container
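	(Annotation, not part of the captured run: the step above probes the existing bridge networks, skips the taken /24s, and creates a profile-named bridge on the first free private subnet, 192.168.85.0/24 here. As a sketch, the resulting subnet and gateway could be read back with the docker CLI:
	    $ docker network inspect auto-053944 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	    # expected to print 192.168.85.0/24 192.168.85.1 for the network created above)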
	I1006 19:57:44.369142  218923 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 19:57:44.391791  218923 cli_runner.go:164] Run: docker volume create auto-053944 --label name.minikube.sigs.k8s.io=auto-053944 --label created_by.minikube.sigs.k8s.io=true
	I1006 19:57:44.427781  218923 oci.go:103] Successfully created a docker volume auto-053944
	I1006 19:57:44.427875  218923 cli_runner.go:164] Run: docker run --rm --name auto-053944-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-053944 --entrypoint /usr/bin/test -v auto-053944:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 19:57:45.250602  218923 oci.go:107] Successfully prepared a docker volume auto-053944
	I1006 19:57:45.250649  218923 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:57:45.250673  218923 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 19:57:45.250749  218923 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-053944:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	W1006 19:57:49.467247  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	W1006 19:57:51.467819  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	I1006 19:57:50.390123  218923 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-053944:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (5.139334705s)
	I1006 19:57:50.390152  218923 kic.go:203] duration metric: took 5.139476246s to extract preloaded images to volume ...
	W1006 19:57:50.390283  218923 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 19:57:50.390383  218923 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 19:57:50.501496  218923 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-053944 --name auto-053944 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-053944 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-053944 --network auto-053944 --ip 192.168.85.2 --volume auto-053944:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 19:57:50.849869  218923 cli_runner.go:164] Run: docker container inspect auto-053944 --format={{.State.Running}}
	I1006 19:57:50.873557  218923 cli_runner.go:164] Run: docker container inspect auto-053944 --format={{.State.Status}}
	I1006 19:57:50.899329  218923 cli_runner.go:164] Run: docker exec auto-053944 stat /var/lib/dpkg/alternatives/iptables
	I1006 19:57:50.958456  218923 oci.go:144] the created container "auto-053944" has a running status.
	I1006 19:57:50.958493  218923 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/auto-053944/id_rsa...
	I1006 19:57:52.220062  218923 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-2540/.minikube/machines/auto-053944/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 19:57:52.240655  218923 cli_runner.go:164] Run: docker container inspect auto-053944 --format={{.State.Status}}
	I1006 19:57:52.257944  218923 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 19:57:52.257972  218923 kic_runner.go:114] Args: [docker exec --privileged auto-053944 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 19:57:52.298709  218923 cli_runner.go:164] Run: docker container inspect auto-053944 --format={{.State.Status}}
	I1006 19:57:52.317476  218923 machine.go:93] provisionDockerMachine start ...
	I1006 19:57:52.317568  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:52.351065  218923 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:52.351517  218923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1006 19:57:52.351529  218923 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:57:52.352807  218923 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1006 19:57:53.965541  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	W1006 19:57:55.965625  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	I1006 19:57:55.491376  218923 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-053944
	
	I1006 19:57:55.491401  218923 ubuntu.go:182] provisioning hostname "auto-053944"
	I1006 19:57:55.491470  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:55.509159  218923 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:55.509477  218923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1006 19:57:55.509493  218923 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-053944 && echo "auto-053944" | sudo tee /etc/hostname
	I1006 19:57:55.658382  218923 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-053944
	
	I1006 19:57:55.658466  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:55.678044  218923 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:55.678373  218923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1006 19:57:55.678395  218923 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-053944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-053944/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-053944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:57:55.814596  218923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:57:55.814623  218923 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:57:55.814681  218923 ubuntu.go:190] setting up certificates
	I1006 19:57:55.814701  218923 provision.go:84] configureAuth start
	I1006 19:57:55.814777  218923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-053944
	I1006 19:57:55.832054  218923 provision.go:143] copyHostCerts
	I1006 19:57:55.832129  218923 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:57:55.832165  218923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:57:55.832248  218923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:57:55.832347  218923 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:57:55.832358  218923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:57:55.832385  218923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:57:55.832442  218923 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:57:55.832451  218923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:57:55.832474  218923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:57:55.832549  218923 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.auto-053944 san=[127.0.0.1 192.168.85.2 auto-053944 localhost minikube]
	I1006 19:57:56.709165  218923 provision.go:177] copyRemoteCerts
	I1006 19:57:56.709238  218923 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:57:56.709285  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:56.729786  218923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/auto-053944/id_rsa Username:docker}
	I1006 19:57:56.827537  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:57:56.847149  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1006 19:57:56.865493  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 19:57:56.883903  218923 provision.go:87] duration metric: took 1.069180531s to configureAuth
	I1006 19:57:56.883934  218923 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:57:56.884132  218923 config.go:182] Loaded profile config "auto-053944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:56.884262  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:56.901607  218923 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:56.901913  218923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1006 19:57:56.901934  218923 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:57:57.161883  218923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:57:57.161909  218923 machine.go:96] duration metric: took 4.8444138s to provisionDockerMachine
	I1006 19:57:57.161918  218923 client.go:171] duration metric: took 12.963467524s to LocalClient.Create
	I1006 19:57:57.161931  218923 start.go:167] duration metric: took 12.96352451s to libmachine.API.Create "auto-053944"
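	(Annotation, not part of the captured run: the kic container was started with --publish=127.0.0.1::22 and the other ports, so the host side of the SSH mapping is a random local port that the log resolves via docker container inspect. An equivalent check, offered only as a sketch:
	    $ docker port auto-053944 22
	    # prints something like 127.0.0.1:33100, matching the SSH port used above)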
	I1006 19:57:57.161937  218923 start.go:293] postStartSetup for "auto-053944" (driver="docker")
	I1006 19:57:57.161946  218923 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:57:57.162009  218923 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:57:57.162067  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:57.189437  218923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/auto-053944/id_rsa Username:docker}
	I1006 19:57:57.287588  218923 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:57:57.290825  218923 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:57:57.290852  218923 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:57:57.290864  218923 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:57:57.290918  218923 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:57:57.291011  218923 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:57:57.291120  218923 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:57:57.298467  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:57:57.316403  218923 start.go:296] duration metric: took 154.452313ms for postStartSetup
	I1006 19:57:57.316777  218923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-053944
	I1006 19:57:57.338435  218923 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/config.json ...
	I1006 19:57:57.338735  218923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:57:57.338790  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:57.356364  218923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/auto-053944/id_rsa Username:docker}
	I1006 19:57:57.448697  218923 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:57:57.453787  218923 start.go:128] duration metric: took 13.259354883s to createHost
	I1006 19:57:57.453813  218923 start.go:83] releasing machines lock for "auto-053944", held for 13.259468542s
	I1006 19:57:57.453884  218923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-053944
	I1006 19:57:57.472900  218923 ssh_runner.go:195] Run: cat /version.json
	I1006 19:57:57.472923  218923 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:57:57.472953  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:57.472989  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:57.494809  218923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/auto-053944/id_rsa Username:docker}
	I1006 19:57:57.495883  218923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/auto-053944/id_rsa Username:docker}
	I1006 19:57:57.591285  218923 ssh_runner.go:195] Run: systemctl --version
	I1006 19:57:57.680041  218923 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:57:57.717917  218923 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:57:57.722312  218923 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:57:57.722382  218923 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:57:57.750875  218923 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 19:57:57.750896  218923 start.go:495] detecting cgroup driver to use...
	I1006 19:57:57.750928  218923 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:57:57.750994  218923 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:57:57.768047  218923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:57:57.781455  218923 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:57:57.781526  218923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:57:57.801040  218923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:57:57.822720  218923 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:57:57.943479  218923 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:57:58.078115  218923 docker.go:234] disabling docker service ...
	I1006 19:57:58.078197  218923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:57:58.103016  218923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:57:58.117098  218923 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:57:58.239446  218923 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:57:58.367865  218923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:57:58.381209  218923 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:57:58.397098  218923 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:57:58.397219  218923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:58.405952  218923 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:57:58.406052  218923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:58.415507  218923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:58.425228  218923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:58.434240  218923 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:57:58.442725  218923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:58.452880  218923 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:58.470392  218923 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:58.479948  218923 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:57:58.488386  218923 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:57:58.495950  218923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:58.612206  218923 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 19:57:58.746993  218923 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:57:58.747098  218923 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:57:58.751167  218923 start.go:563] Will wait 60s for crictl version
	I1006 19:57:58.751276  218923 ssh_runner.go:195] Run: which crictl
	I1006 19:57:58.754987  218923 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:57:58.788087  218923 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:57:58.788199  218923 ssh_runner.go:195] Run: crio --version
	I1006 19:57:58.830668  218923 ssh_runner.go:195] Run: crio --version
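	(Annotation, not part of the captured run: the sed edits above set the pause image, switch CRI-O to the cgroupfs cgroup manager, and open unprivileged low ports. One way to confirm the rendered settings, as a sketch only, is to grep the merged configuration that crio config prints:
	    $ sudo crio config | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start')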
	I1006 19:57:58.867362  218923 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:57:58.870144  218923 cli_runner.go:164] Run: docker network inspect auto-053944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:57:58.887128  218923 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1006 19:57:58.891235  218923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:57:58.901634  218923 kubeadm.go:883] updating cluster {Name:auto-053944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-053944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:57:58.901749  218923 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:57:58.901815  218923 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:57:58.948930  218923 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:57:58.948955  218923 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:57:58.949018  218923 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:57:58.981424  218923 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:57:58.981455  218923 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:57:58.981463  218923 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1006 19:57:58.981544  218923 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-053944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-053944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:57:58.981643  218923 ssh_runner.go:195] Run: crio config
	I1006 19:57:59.057069  218923 cni.go:84] Creating CNI manager for ""
	I1006 19:57:59.057095  218923 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:57:59.057109  218923 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:57:59.057161  218923 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-053944 NodeName:auto-053944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/
manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:57:59.057337  218923 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-053944"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:57:59.057432  218923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:57:59.065760  218923 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:57:59.065857  218923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:57:59.073834  218923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1006 19:57:59.087463  218923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:57:59.100737  218923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1006 19:57:59.114683  218923 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:57:59.118476  218923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:57:59.128523  218923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:59.246018  218923 ssh_runner.go:195] Run: sudo systemctl start kubelet
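	(Annotation, not part of the captured run: with the kubeadm manifest staged at /var/tmp/minikube/kubeadm.yaml.new and the kubelet unit plus its drop-in written and started, the configuration could be sanity-checked before bootstrapping. A sketch, assuming kubeadm sits next to the kubelet binary found above, which this run does not do explicitly:
	    $ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	    $ sudo systemctl cat kubelet.service   # shows the unit plus the 10-kubeadm.conf drop-in written above)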
	I1006 19:57:59.263293  218923 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944 for IP: 192.168.85.2
	I1006 19:57:59.263364  218923 certs.go:195] generating shared ca certs ...
	I1006 19:57:59.263400  218923 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:59.263569  218923 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:57:59.263648  218923 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:57:59.263671  218923 certs.go:257] generating profile certs ...
	I1006 19:57:59.263795  218923 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.key
	I1006 19:57:59.263848  218923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.crt with IP's: []
	I1006 19:57:59.922101  218923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.crt ...
	I1006 19:57:59.922139  218923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.crt: {Name:mk9a4a220a47ff3ca80c57e982cfdac4ebcba118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:59.922341  218923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.key ...
	I1006 19:57:59.922355  218923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.key: {Name:mkf958b4adec13e459bb8782b35d81ceacb5ac4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:59.922449  218923 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.key.77e7bfad
	I1006 19:57:59.922464  218923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.crt.77e7bfad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1006 19:58:01.035790  218923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.crt.77e7bfad ...
	I1006 19:58:01.035865  218923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.crt.77e7bfad: {Name:mk8ff224b85267052de956b9f235788db252faa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:58:01.036193  218923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.key.77e7bfad ...
	I1006 19:58:01.036237  218923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.key.77e7bfad: {Name:mkd1c5846047e929423b81ed27ea8d0d62dfd78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:58:01.036367  218923 certs.go:382] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.crt.77e7bfad -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.crt
	I1006 19:58:01.036490  218923 certs.go:386] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.key.77e7bfad -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.key
	I1006 19:58:01.036594  218923 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.key
	I1006 19:58:01.036650  218923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.crt with IP's: []
	I1006 19:58:01.535938  218923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.crt ...
	I1006 19:58:01.535975  218923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.crt: {Name:mk4a9b87970e1b28b78560b009769c5b9b6d281d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:58:01.536170  218923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.key ...
	I1006 19:58:01.536184  218923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.key: {Name:mkfaaa9b5cfbe207b7f35dfe33c3c1dc7ec4f2a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:58:01.536396  218923 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:58:01.536449  218923 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:58:01.536461  218923 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:58:01.536489  218923 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:58:01.536515  218923 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:58:01.536540  218923 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:58:01.536587  218923 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:58:01.537208  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:58:01.556186  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:58:01.578013  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:58:01.597241  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:58:01.614965  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1006 19:58:01.633980  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 19:58:01.653720  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:58:01.673550  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 19:58:01.692539  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:58:01.713171  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:58:01.732152  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:58:01.751469  218923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:58:01.765178  218923 ssh_runner.go:195] Run: openssl version
	I1006 19:58:01.771623  218923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:58:01.781663  218923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:58:01.786199  218923 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:58:01.786286  218923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:58:01.839231  218923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:58:01.848243  218923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:58:01.856932  218923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:58:01.860883  218923 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:58:01.860957  218923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:58:01.902504  218923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:58:01.910917  218923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:58:01.919432  218923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:58:01.923324  218923 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:58:01.923392  218923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:58:01.971041  218923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
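	A hedged aside on the two commands above: openssl x509 -hash prints the OpenSSL subject-name hash, and the <hash>.0 symlink under /etc/ssl/certs is what makes a CA discoverable to TLS clients; that is why b5213941.0 ends up pointing at minikubeCA.pem. A minimal sketch of the same convention (illustrative only, not part of the test run):
	
	# compute the subject-name hash and publish the CA under /etc/ssl/certs
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	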
	I1006 19:58:01.981133  218923 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:58:01.985133  218923 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 19:58:01.985185  218923 kubeadm.go:400] StartCluster: {Name:auto-053944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-053944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:58:01.985269  218923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:58:01.985344  218923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:58:02.013783  218923 cri.go:89] found id: ""
	I1006 19:58:02.013930  218923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:58:02.032994  218923 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 19:58:02.042155  218923 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:58:02.042224  218923 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:58:02.051835  218923 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:58:02.051900  218923 kubeadm.go:157] found existing configuration files:
	
	I1006 19:58:02.051982  218923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 19:58:02.060507  218923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:58:02.060619  218923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:58:02.068473  218923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 19:58:02.078800  218923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:58:02.078893  218923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:58:02.086842  218923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 19:58:02.095029  218923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:58:02.095126  218923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:58:02.108239  218923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 19:58:02.117648  218923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:58:02.117753  218923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 19:58:02.125851  218923 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:58:02.166460  218923 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:58:02.166604  218923 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:58:02.193510  218923 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:58:02.193610  218923 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:58:02.193667  218923 kubeadm.go:318] OS: Linux
	I1006 19:58:02.193730  218923 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:58:02.193799  218923 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:58:02.193866  218923 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:58:02.193936  218923 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:58:02.194021  218923 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:58:02.194089  218923 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:58:02.194150  218923 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:58:02.194215  218923 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:58:02.194280  218923 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:58:02.265437  218923 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:58:02.265621  218923 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:58:02.265785  218923 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:58:02.273538  218923 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1006 19:57:57.968469  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	W1006 19:57:59.968739  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	I1006 19:58:02.279476  218923 out.go:252]   - Generating certificates and keys ...
	I1006 19:58:02.279625  218923 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:58:02.279809  218923 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:58:02.434303  218923 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 19:58:02.673467  218923 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 19:58:03.679747  218923 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	W1006 19:58:02.467487  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	W1006 19:58:04.967229  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	I1006 19:58:05.485089  214483 pod_ready.go:94] pod "coredns-66bc5c9577-bns67" is "Ready"
	I1006 19:58:05.485122  214483 pod_ready.go:86] duration metric: took 31.525127088s for pod "coredns-66bc5c9577-bns67" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:05.509416  214483 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:05.518284  214483 pod_ready.go:94] pod "etcd-default-k8s-diff-port-997276" is "Ready"
	I1006 19:58:05.518314  214483 pod_ready.go:86] duration metric: took 8.865618ms for pod "etcd-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:05.595042  214483 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:05.605126  214483 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-997276" is "Ready"
	I1006 19:58:05.605157  214483 pod_ready.go:86] duration metric: took 10.068676ms for pod "kube-apiserver-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:05.610428  214483 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:05.666393  214483 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-997276" is "Ready"
	I1006 19:58:05.666434  214483 pod_ready.go:86] duration metric: took 55.955539ms for pod "kube-controller-manager-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:05.865164  214483 pod_ready.go:83] waiting for pod "kube-proxy-zl7gg" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:06.264358  214483 pod_ready.go:94] pod "kube-proxy-zl7gg" is "Ready"
	I1006 19:58:06.264384  214483 pod_ready.go:86] duration metric: took 399.191622ms for pod "kube-proxy-zl7gg" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:06.465612  214483 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:06.865582  214483 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-997276" is "Ready"
	I1006 19:58:06.865608  214483 pod_ready.go:86] duration metric: took 399.970246ms for pod "kube-scheduler-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:06.865620  214483 pod_ready.go:40] duration metric: took 32.914411778s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:58:06.940846  214483 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 19:58:06.944152  214483 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-997276" cluster and "default" namespace by default
	I1006 19:58:04.549780  218923 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 19:58:05.065568  218923 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 19:58:05.065862  218923 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-053944 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1006 19:58:06.585071  218923 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 19:58:06.585426  218923 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-053944 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1006 19:58:06.985205  218923 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 19:58:07.867730  218923 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 19:58:08.225254  218923 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 19:58:08.225565  218923 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 19:58:08.602025  218923 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:58:08.858064  218923 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:58:09.162673  218923 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:58:10.154418  218923 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:58:10.591668  218923 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:58:10.592334  218923 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:58:10.597120  218923 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 19:58:10.600701  218923 out.go:252]   - Booting up control plane ...
	I1006 19:58:10.600810  218923 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:58:10.602467  218923 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:58:10.612074  218923 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:58:10.633958  218923 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:58:10.634086  218923 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:58:10.643100  218923 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:58:10.643606  218923 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:58:10.644040  218923 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:58:10.771492  218923 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:58:10.771623  218923 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:58:11.773060  218923 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001844913s
	I1006 19:58:11.776606  218923 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:58:11.776726  218923 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1006 19:58:11.777022  218923 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:58:11.777115  218923 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 19:58:17.037765  218923 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.261119374s
	I1006 19:58:17.431149  218923 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.65405661s
	I1006 19:58:18.278204  218923 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501511828s
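	The control-plane-check lines above poll fixed local endpoints. A hedged sketch of probing the same endpoints by hand, with ports and paths taken from the log (the -k flag is needed because the serving certificates are not in the host trust store):
	
	curl -sk https://192.168.85.2:8443/livez      # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez        # kube-scheduler
	curl -s  http://127.0.0.1:10248/healthz       # kubelet
	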
	I1006 19:58:18.300947  218923 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 19:58:18.315901  218923 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 19:58:18.331774  218923 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 19:58:18.332009  218923 kubeadm.go:318] [mark-control-plane] Marking the node auto-053944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 19:58:18.344765  218923 kubeadm.go:318] [bootstrap-token] Using token: hnzsyo.7j0qpltq6uuqhwmm
	I1006 19:58:18.347657  218923 out.go:252]   - Configuring RBAC rules ...
	I1006 19:58:18.347809  218923 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 19:58:18.352168  218923 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 19:58:18.363140  218923 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 19:58:18.368041  218923 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 19:58:18.373360  218923 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 19:58:18.381277  218923 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 19:58:18.686499  218923 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 19:58:19.131265  218923 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1006 19:58:19.685878  218923 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1006 19:58:19.687588  218923 kubeadm.go:318] 
	I1006 19:58:19.687667  218923 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1006 19:58:19.687676  218923 kubeadm.go:318] 
	I1006 19:58:19.687849  218923 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1006 19:58:19.687865  218923 kubeadm.go:318] 
	I1006 19:58:19.687910  218923 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1006 19:58:19.687976  218923 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 19:58:19.688038  218923 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 19:58:19.688047  218923 kubeadm.go:318] 
	I1006 19:58:19.688106  218923 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1006 19:58:19.688115  218923 kubeadm.go:318] 
	I1006 19:58:19.688167  218923 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 19:58:19.688174  218923 kubeadm.go:318] 
	I1006 19:58:19.688230  218923 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1006 19:58:19.688321  218923 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 19:58:19.688401  218923 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 19:58:19.688408  218923 kubeadm.go:318] 
	I1006 19:58:19.688506  218923 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 19:58:19.688589  218923 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1006 19:58:19.688595  218923 kubeadm.go:318] 
	I1006 19:58:19.688684  218923 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token hnzsyo.7j0qpltq6uuqhwmm \
	I1006 19:58:19.688808  218923 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd \
	I1006 19:58:19.688834  218923 kubeadm.go:318] 	--control-plane 
	I1006 19:58:19.688841  218923 kubeadm.go:318] 
	I1006 19:58:19.688930  218923 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1006 19:58:19.688938  218923 kubeadm.go:318] 
	I1006 19:58:19.689024  218923 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token hnzsyo.7j0qpltq6uuqhwmm \
	I1006 19:58:19.689153  218923 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd 
	I1006 19:58:19.693429  218923 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 19:58:19.693678  218923 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:58:19.693823  218923 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 19:58:19.693850  218923 cni.go:84] Creating CNI manager for ""
	I1006 19:58:19.693858  218923 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:58:19.696792  218923 out.go:179] * Configuring CNI (Container Networking Interface) ...
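	The join commands above carry a --discovery-token-ca-cert-hash value. A hedged sketch of the standard way to recompute it from the cluster CA (this assumes an RSA CA key; the certificateDir /var/lib/minikube/certs comes from the kubeadm output earlier, and this is not a step the test run performs):
	
	# SHA-256 of the CA's Subject Public Key Info, the format expected by --discovery-token-ca-cert-hash
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	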
	
	
	==> CRI-O <==
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.239198903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.253829773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.254993134Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.280363454Z" level=info msg="Created container f23c7b1e4e02b9f141c926732d0f2535e9f3243ba0bee45a42e1d11d5e2c978c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs/dashboard-metrics-scraper" id=fa1a3342-30a0-4c73-b00a-bde3648eb9dc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.285651742Z" level=info msg="Starting container: f23c7b1e4e02b9f141c926732d0f2535e9f3243ba0bee45a42e1d11d5e2c978c" id=5b11b0cc-297d-4f1f-bc5d-1ab254dac129 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.290846407Z" level=info msg="Started container" PID=1667 containerID=f23c7b1e4e02b9f141c926732d0f2535e9f3243ba0bee45a42e1d11d5e2c978c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs/dashboard-metrics-scraper id=5b11b0cc-297d-4f1f-bc5d-1ab254dac129 name=/runtime.v1.RuntimeService/StartContainer sandboxID=142a5ae69a029c6c56165d4e5486b54ec123e924b46d886cb03084b7334126cb
	Oct 06 19:58:10 default-k8s-diff-port-997276 conmon[1665]: conmon f23c7b1e4e02b9f141c9 <ninfo>: container 1667 exited with status 1
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.607195413Z" level=info msg="Removing container: 21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed" id=5141fa03-597c-44ee-9796-e50b5d5007f0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.644968919Z" level=info msg="Error loading conmon cgroup of container 21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed: cgroup deleted" id=5141fa03-597c-44ee-9796-e50b5d5007f0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.648427535Z" level=info msg="Removed container 21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs/dashboard-metrics-scraper" id=5141fa03-597c-44ee-9796-e50b5d5007f0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.89783933Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.903295646Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.903332299Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.90335634Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.906489642Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.906529872Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.906548711Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.915822099Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.91585748Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.91587426Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.91884632Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.918878837Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.918898448Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.927033911Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.927073165Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f23c7b1e4e02b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago       Exited              dashboard-metrics-scraper   2                   142a5ae69a029       dashboard-metrics-scraper-6ffb444bf9-vtgzs             kubernetes-dashboard
	94543004a692d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago       Running             storage-provisioner         2                   32b4320e94207       storage-provisioner                                    kube-system
	4e61e37ab2416       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   0fe92d30560f4       kubernetes-dashboard-855c9754f9-l9n6g                  kubernetes-dashboard
	7b0a7293bb031       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago       Running             kindnet-cni                 1                   1681e32070bc5       kindnet-twtwt                                          kube-system
	d7f4a4e3f96ad       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago       Running             coredns                     1                   d91653a2873a4       coredns-66bc5c9577-bns67                               kube-system
	021f8a3c8caf4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago       Running             kube-proxy                  1                   c2b213aa0926c       kube-proxy-zl7gg                                       kube-system
	9f8260c3ac833       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago       Exited              storage-provisioner         1                   32b4320e94207       storage-provisioner                                    kube-system
	d94f016d6a660       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   68e55d76e792f       busybox                                                default
	69016b9f99d77       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   2c33b7579b455       kube-scheduler-default-k8s-diff-port-997276            kube-system
	d55af54186cd2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   80a9efa5a960f       kube-controller-manager-default-k8s-diff-port-997276   kube-system
	e8361e8e1ee06       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   1f227176debee       etcd-default-k8s-diff-port-997276                      kube-system
	3ccfad32d3fc7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   721ffe017d46c       kube-apiserver-default-k8s-diff-port-997276            kube-system
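	The table above is the kind of listing crictl produces on the node. A hedged sketch of reproducing it directly (the --quiet and --label flags mirror the crictl invocation logged earlier in this report):
	
	sudo crictl ps -a                                                            # all containers, running and exited
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system    # IDs only, kube-system pods
	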
	
	
	==> coredns [d7f4a4e3f96ad1311da2b4b0839dfdde33ceb36b5b11c73ecf55feda067baebb] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49702 - 22719 "HINFO IN 6995182671490730965.1443162241631516290. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013380802s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
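	The reflector errors above show coredns timing out against the service VIP 10.96.0.1:443 before the API becomes reachable. A hedged sketch of the checks one would run from the host to confirm coredns eventually reports Ready (plain kubectl, not part of the report; the k8s-app=kube-dns label matches the one the test waits on):
	
	kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
	kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
	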
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-997276
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-997276
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=default-k8s-diff-port-997276
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_55_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:55:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-997276
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:58:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:58:00 +0000   Mon, 06 Oct 2025 19:55:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:58:00 +0000   Mon, 06 Oct 2025 19:55:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:58:00 +0000   Mon, 06 Oct 2025 19:55:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:58:00 +0000   Mon, 06 Oct 2025 19:56:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-997276
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 f613ab9f1dc7456b9fc37f90f2631726
	  System UUID:                4764672c-0e9d-4c30-bf0e-576675527b0d
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-bns67                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-default-k8s-diff-port-997276                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-twtwt                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-997276             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-997276    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-zl7gg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-997276             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vtgzs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-l9n6g                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 48s                    kube-proxy       
	  Warning  CgroupV1                 2m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m19s                  node-controller  Node default-k8s-diff-port-997276 event: Registered Node default-k8s-diff-port-997276 in Controller
	  Normal   NodeReady                97s                    kubelet          Node default-k8s-diff-port-997276 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                    node-controller  Node default-k8s-diff-port-997276 event: Registered Node default-k8s-diff-port-997276 in Controller
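	The node description above is standard kubectl describe output. A hedged sketch of reproducing this view and extracting just the Ready condition (illustrative only):
	
	kubectl describe node default-k8s-diff-port-997276
	kubectl get node default-k8s-diff-port-997276 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	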
	
	
	==> dmesg <==
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:52] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:55] overlayfs: idmapped layers are currently not supported
	[ +29.761517] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:56] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:57] overlayfs: idmapped layers are currently not supported
	[  +2.641672] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e8361e8e1ee0632342d163d9478d496057250b3eeebfb0c28f6c1c0c8879ec24] <==
	{"level":"warn","ts":"2025-10-06T19:57:26.394070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.436168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.480796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.505019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.538781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.591165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.635816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.678894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.713865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.787224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.815452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.870827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.911995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.951265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.988395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.013692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.038409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.071970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.113706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.151102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.226545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.249875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.281747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.325055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.655820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53158","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:58:23 up  1:40,  0 user,  load average: 5.04, 3.57, 2.46
	Linux default-k8s-diff-port-997276 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7b0a7293bb03120d2db1b7f8559bcd7af73005b9a4cb29e752ff4eff8a6184d5] <==
	I1006 19:57:32.662420       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:57:32.706878       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1006 19:57:32.707578       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:57:32.707628       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:57:32.707667       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:57:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:57:32.896438       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:57:32.896467       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:57:32.896476       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:57:32.949409       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1006 19:58:02.897585       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1006 19:58:02.897750       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1006 19:58:02.949248       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1006 19:58:02.949354       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1006 19:58:04.475207       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:58:04.475253       1 metrics.go:72] Registering metrics
	I1006 19:58:04.475356       1 controller.go:711] "Syncing nftables rules"
	I1006 19:58:12.897491       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:58:12.897564       1 main.go:301] handling current node
	I1006 19:58:22.899784       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:58:22.899820       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3ccfad32d3fc760df3c3a85997578af79aef13072ed1d94e610c623e692996c5] <==
	I1006 19:57:29.969598       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1006 19:57:29.999421       1 aggregator.go:171] initial CRD sync complete...
	I1006 19:57:29.999446       1 autoregister_controller.go:144] Starting autoregister controller
	I1006 19:57:29.999454       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 19:57:29.999461       1 cache.go:39] Caches are synced for autoregister controller
	I1006 19:57:30.027759       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:57:30.028234       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:57:30.028279       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1006 19:57:30.028349       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1006 19:57:30.057573       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 19:57:30.103962       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1006 19:57:30.103993       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1006 19:57:30.106659       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1006 19:57:30.153052       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 19:57:30.174053       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1006 19:57:30.410483       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:57:32.565599       1 controller.go:667] quota admission added evaluator for: namespaces
	I1006 19:57:33.032833       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 19:57:33.273158       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:57:33.380238       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:57:33.669214       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.156.166"}
	I1006 19:57:33.695324       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.144.221"}
	I1006 19:57:35.327097       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 19:57:35.675102       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 19:57:35.772719       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d55af54186cd2219cd4bae48a2441c417921efdadcb8fc6384bac72028d350db] <==
	I1006 19:57:35.308606       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 19:57:35.313246       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1006 19:57:35.313369       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:57:35.313378       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:57:35.313385       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 19:57:35.314313       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1006 19:57:35.314393       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1006 19:57:35.314622       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1006 19:57:35.314638       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1006 19:57:35.316064       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1006 19:57:35.327858       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1006 19:57:35.332685       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1006 19:57:35.335882       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1006 19:57:35.336345       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1006 19:57:35.342397       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1006 19:57:35.343783       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1006 19:57:35.344623       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:57:35.346481       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1006 19:57:35.349755       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1006 19:57:35.355791       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 19:57:35.356348       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1006 19:57:35.367121       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1006 19:57:35.367231       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1006 19:57:35.371642       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1006 19:57:35.374501       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	
	
	==> kube-proxy [021f8a3c8caf4e3062528b2830e6d3bd81c80b253a054789688df77fd6566fa9] <==
	I1006 19:57:33.717827       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:57:33.852865       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:57:33.959779       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:57:33.959812       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1006 19:57:33.959889       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:57:34.174567       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:57:34.174624       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:57:34.183166       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:57:34.183475       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:57:34.183500       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:57:34.184926       1 config.go:200] "Starting service config controller"
	I1006 19:57:34.184947       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:57:34.184983       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:57:34.184988       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:57:34.184998       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:57:34.185001       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:57:34.185654       1 config.go:309] "Starting node config controller"
	I1006 19:57:34.185670       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:57:34.185677       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:57:34.285407       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 19:57:34.285423       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 19:57:34.285455       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [69016b9f99d774f122c6a833714dc8aa9a9bed369630bb89cab483dc73bb2308] <==
	I1006 19:57:27.301688       1 serving.go:386] Generated self-signed cert in-memory
	I1006 19:57:33.039184       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 19:57:33.039344       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:57:33.084320       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 19:57:33.084435       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1006 19:57:33.084478       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1006 19:57:33.084510       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 19:57:33.085379       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:57:33.085413       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:57:33.085565       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:57:33.085591       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:57:33.197726       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1006 19:57:33.206004       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:57:33.206906       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 19:57:31 default-k8s-diff-port-997276 kubelet[773]: W1006 19:57:31.988608     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/crio-d91653a2873a4cb409b63531895bf6398dfd971f42632321639bed92b14996d2 WatchSource:0}: Error finding container d91653a2873a4cb409b63531895bf6398dfd971f42632321639bed92b14996d2: Status 404 returned error can't find the container with id d91653a2873a4cb409b63531895bf6398dfd971f42632321639bed92b14996d2
	Oct 06 19:57:35 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:35.933401     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/47bee36e-d532-4335-89c4-581d78beb60b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vtgzs\" (UID: \"47bee36e-d532-4335-89c4-581d78beb60b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs"
	Oct 06 19:57:35 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:35.933449     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8xfs\" (UniqueName: \"kubernetes.io/projected/6043dfa7-271b-4e1a-be38-b574fee8ce17-kube-api-access-n8xfs\") pod \"kubernetes-dashboard-855c9754f9-l9n6g\" (UID: \"6043dfa7-271b-4e1a-be38-b574fee8ce17\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9n6g"
	Oct 06 19:57:35 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:35.933475     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj66x\" (UniqueName: \"kubernetes.io/projected/47bee36e-d532-4335-89c4-581d78beb60b-kube-api-access-nj66x\") pod \"dashboard-metrics-scraper-6ffb444bf9-vtgzs\" (UID: \"47bee36e-d532-4335-89c4-581d78beb60b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs"
	Oct 06 19:57:35 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:35.933493     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6043dfa7-271b-4e1a-be38-b574fee8ce17-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-l9n6g\" (UID: \"6043dfa7-271b-4e1a-be38-b574fee8ce17\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9n6g"
	Oct 06 19:57:36 default-k8s-diff-port-997276 kubelet[773]: W1006 19:57:36.249330     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/crio-142a5ae69a029c6c56165d4e5486b54ec123e924b46d886cb03084b7334126cb WatchSource:0}: Error finding container 142a5ae69a029c6c56165d4e5486b54ec123e924b46d886cb03084b7334126cb: Status 404 returned error can't find the container with id 142a5ae69a029c6c56165d4e5486b54ec123e924b46d886cb03084b7334126cb
	Oct 06 19:57:50 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:50.527534     773 scope.go:117] "RemoveContainer" containerID="6ca18da481f6dff47ce3169c15af45d04f8c51025949d3f11010971a81694c77"
	Oct 06 19:57:50 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:50.586472     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9n6g" podStartSLOduration=8.450800009 podStartE2EDuration="15.586438494s" podCreationTimestamp="2025-10-06 19:57:35 +0000 UTC" firstStartedPulling="2025-10-06 19:57:36.23586088 +0000 UTC m=+15.435737179" lastFinishedPulling="2025-10-06 19:57:43.371499357 +0000 UTC m=+22.571375664" observedRunningTime="2025-10-06 19:57:43.559784431 +0000 UTC m=+22.759660754" watchObservedRunningTime="2025-10-06 19:57:50.586438494 +0000 UTC m=+29.786314793"
	Oct 06 19:57:51 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:51.538447     773 scope.go:117] "RemoveContainer" containerID="6ca18da481f6dff47ce3169c15af45d04f8c51025949d3f11010971a81694c77"
	Oct 06 19:57:51 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:51.539419     773 scope.go:117] "RemoveContainer" containerID="21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed"
	Oct 06 19:57:51 default-k8s-diff-port-997276 kubelet[773]: E1006 19:57:51.547018     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vtgzs_kubernetes-dashboard(47bee36e-d532-4335-89c4-581d78beb60b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs" podUID="47bee36e-d532-4335-89c4-581d78beb60b"
	Oct 06 19:57:52 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:52.538013     773 scope.go:117] "RemoveContainer" containerID="21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed"
	Oct 06 19:57:52 default-k8s-diff-port-997276 kubelet[773]: E1006 19:57:52.538182     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vtgzs_kubernetes-dashboard(47bee36e-d532-4335-89c4-581d78beb60b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs" podUID="47bee36e-d532-4335-89c4-581d78beb60b"
	Oct 06 19:57:56 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:56.192728     773 scope.go:117] "RemoveContainer" containerID="21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed"
	Oct 06 19:57:56 default-k8s-diff-port-997276 kubelet[773]: E1006 19:57:56.192957     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vtgzs_kubernetes-dashboard(47bee36e-d532-4335-89c4-581d78beb60b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs" podUID="47bee36e-d532-4335-89c4-581d78beb60b"
	Oct 06 19:58:03 default-k8s-diff-port-997276 kubelet[773]: I1006 19:58:03.580976     773 scope.go:117] "RemoveContainer" containerID="9f8260c3ac833cb980660b518720852ae3a00d4314fd06714ca37ecb0e765aa1"
	Oct 06 19:58:10 default-k8s-diff-port-997276 kubelet[773]: I1006 19:58:10.235178     773 scope.go:117] "RemoveContainer" containerID="21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed"
	Oct 06 19:58:10 default-k8s-diff-port-997276 kubelet[773]: I1006 19:58:10.602344     773 scope.go:117] "RemoveContainer" containerID="21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed"
	Oct 06 19:58:10 default-k8s-diff-port-997276 kubelet[773]: I1006 19:58:10.603400     773 scope.go:117] "RemoveContainer" containerID="f23c7b1e4e02b9f141c926732d0f2535e9f3243ba0bee45a42e1d11d5e2c978c"
	Oct 06 19:58:10 default-k8s-diff-port-997276 kubelet[773]: E1006 19:58:10.607136     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vtgzs_kubernetes-dashboard(47bee36e-d532-4335-89c4-581d78beb60b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs" podUID="47bee36e-d532-4335-89c4-581d78beb60b"
	Oct 06 19:58:16 default-k8s-diff-port-997276 kubelet[773]: I1006 19:58:16.192437     773 scope.go:117] "RemoveContainer" containerID="f23c7b1e4e02b9f141c926732d0f2535e9f3243ba0bee45a42e1d11d5e2c978c"
	Oct 06 19:58:16 default-k8s-diff-port-997276 kubelet[773]: E1006 19:58:16.193140     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vtgzs_kubernetes-dashboard(47bee36e-d532-4335-89c4-581d78beb60b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs" podUID="47bee36e-d532-4335-89c4-581d78beb60b"
	Oct 06 19:58:19 default-k8s-diff-port-997276 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 06 19:58:19 default-k8s-diff-port-997276 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 06 19:58:19 default-k8s-diff-port-997276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4e61e37ab2416491e37ee9385b1a5a53f023ad0488c291c5826cce8b6d887514] <==
	2025/10/06 19:57:43 Using namespace: kubernetes-dashboard
	2025/10/06 19:57:43 Using in-cluster config to connect to apiserver
	2025/10/06 19:57:43 Using secret token for csrf signing
	2025/10/06 19:57:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/06 19:57:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/06 19:57:43 Successful initial request to the apiserver, version: v1.34.1
	2025/10/06 19:57:43 Generating JWE encryption key
	2025/10/06 19:57:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/06 19:57:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/06 19:57:44 Initializing JWE encryption key from synchronized object
	2025/10/06 19:57:44 Creating in-cluster Sidecar client
	2025/10/06 19:57:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/06 19:57:44 Serving insecurely on HTTP port: 9090
	2025/10/06 19:58:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/06 19:57:43 Starting overwatch
	
	
	==> storage-provisioner [94543004a692d2869152c4706f66a2c0539fd2fc422c7b5955c377fa34be7e27] <==
	I1006 19:58:03.656937       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 19:58:03.677389       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 19:58:03.677521       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1006 19:58:03.680546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:07.135958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:11.397479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:14.995672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:18.049367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:21.071812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:21.079193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:58:21.079415       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 19:58:21.080251       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6aac352d-9443-44be-81f2-135d3c658690", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-997276_6f74d579-0b8f-494a-ae0a-c1787b1a7e8a became leader
	W1006 19:58:21.083003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:58:21.083293       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-997276_6f74d579-0b8f-494a-ae0a-c1787b1a7e8a!
	W1006 19:58:21.092582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:58:21.183877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-997276_6f74d579-0b8f-494a-ae0a-c1787b1a7e8a!
	W1006 19:58:23.098777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:23.108250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9f8260c3ac833cb980660b518720852ae3a00d4314fd06714ca37ecb0e765aa1] <==
	I1006 19:57:32.949471       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1006 19:58:03.026247       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-997276 -n default-k8s-diff-port-997276
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-997276 -n default-k8s-diff-port-997276: exit status 2 (441.085058ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-997276 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-997276
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-997276:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b",
	        "Created": "2025-10-06T19:55:30.333531639Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 214640,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T19:57:12.741142127Z",
	            "FinishedAt": "2025-10-06T19:57:11.941718673Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/hosts",
	        "LogPath": "/var/lib/docker/containers/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b-json.log",
	        "Name": "/default-k8s-diff-port-997276",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-997276:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-997276",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b",
	                "LowerDir": "/var/lib/docker/overlay2/ca41d0f8b7cb8f1b3af001a8f5128ec86c614fe6ad93ed4708e799d1261586b4-init/diff:/var/lib/docker/overlay2/950dad8a7486c25883a9845454dded3949e8f3a53166d005e6749ff210bcab80/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca41d0f8b7cb8f1b3af001a8f5128ec86c614fe6ad93ed4708e799d1261586b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca41d0f8b7cb8f1b3af001a8f5128ec86c614fe6ad93ed4708e799d1261586b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca41d0f8b7cb8f1b3af001a8f5128ec86c614fe6ad93ed4708e799d1261586b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-997276",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-997276/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-997276",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-997276",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-997276",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "549805a326b489bff8ea6bee639dadd07fdbb2a4809f0ebf5ec6f38cdf6f3638",
	            "SandboxKey": "/var/run/docker/netns/549805a326b4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-997276": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:90:90:ee:3c:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2e10a72004c0565fee9f56eb617f1837118ee48bf9bd5cadbc46998fb4ed527c",
	                    "EndpointID": "cd09782053ae9462a23480eef241c52360d5456bbde72773d527b29fa9d89acb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-997276",
	                        "4fc3831db948"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-997276 -n default-k8s-diff-port-997276
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-997276 -n default-k8s-diff-port-997276: exit status 2 (470.919735ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-997276 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-997276 logs -n 25: (2.184908953s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:56 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p no-preload-314275                                                                                                                                                                                                                          │ no-preload-314275            │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ delete  │ -p disable-driver-mounts-932453                                                                                                                                                                                                               │ disable-driver-mounts-932453 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:55 UTC │
	│ start   │ -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:55 UTC │ 06 Oct 25 19:56 UTC │
	│ image   │ embed-certs-830393 image list --format=json                                                                                                                                                                                                   │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ pause   │ -p embed-certs-830393 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	│ delete  │ -p embed-certs-830393                                                                                                                                                                                                                         │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ delete  │ -p embed-certs-830393                                                                                                                                                                                                                         │ embed-certs-830393           │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:56 UTC │
	│ start   │ -p newest-cni-988436 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-997276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:56 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-997276 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p newest-cni-988436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │                     │
	│ stop    │ -p newest-cni-988436 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable dashboard -p newest-cni-988436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ start   │ -p newest-cni-988436 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-997276 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ start   │ -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:58 UTC │
	│ image   │ newest-cni-988436 image list --format=json                                                                                                                                                                                                    │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ pause   │ -p newest-cni-988436 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │                     │
	│ delete  │ -p newest-cni-988436                                                                                                                                                                                                                          │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ delete  │ -p newest-cni-988436                                                                                                                                                                                                                          │ newest-cni-988436            │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │ 06 Oct 25 19:57 UTC │
	│ start   │ -p auto-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-053944                  │ jenkins │ v1.37.0 │ 06 Oct 25 19:57 UTC │                     │
	│ image   │ default-k8s-diff-port-997276 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:58 UTC │ 06 Oct 25 19:58 UTC │
	│ pause   │ -p default-k8s-diff-port-997276 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-997276 │ jenkins │ v1.37.0 │ 06 Oct 25 19:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 19:57:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 19:57:43.816963  218923 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:57:43.817187  218923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:57:43.817215  218923 out.go:374] Setting ErrFile to fd 2...
	I1006 19:57:43.817235  218923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:57:43.817562  218923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:57:43.818031  218923 out.go:368] Setting JSON to false
	I1006 19:57:43.821074  218923 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5999,"bootTime":1759774665,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:57:43.821190  218923 start.go:140] virtualization:  
	I1006 19:57:43.825301  218923 out.go:179] * [auto-053944] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:57:43.830952  218923 notify.go:220] Checking for updates...
	I1006 19:57:43.833805  218923 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:57:43.836959  218923 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:57:43.840156  218923 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:57:43.844222  218923 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:57:43.847030  218923 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:57:43.850090  218923 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:57:43.853928  218923 config.go:182] Loaded profile config "default-k8s-diff-port-997276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:43.854115  218923 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:57:43.884767  218923 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:57:43.884958  218923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:57:44.016936  218923 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:57:44.003015435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:57:44.017201  218923 docker.go:318] overlay module found
	I1006 19:57:44.020706  218923 out.go:179] * Using the docker driver based on user configuration
	I1006 19:57:44.027543  218923 start.go:304] selected driver: docker
	I1006 19:57:44.027572  218923 start.go:924] validating driver "docker" against <nil>
	I1006 19:57:44.027587  218923 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:57:44.028568  218923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:57:44.146328  218923 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:57:44.137706405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:57:44.146483  218923 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 19:57:44.146704  218923 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 19:57:44.149840  218923 out.go:179] * Using Docker driver with root privileges
	I1006 19:57:44.153075  218923 cni.go:84] Creating CNI manager for ""
	I1006 19:57:44.153148  218923 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:57:44.153156  218923 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 19:57:44.153236  218923 start.go:348] cluster config:
	{Name:auto-053944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-053944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:57:44.156837  218923 out.go:179] * Starting "auto-053944" primary control-plane node in "auto-053944" cluster
	I1006 19:57:44.159978  218923 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 19:57:44.163111  218923 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 19:57:44.165963  218923 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:57:44.166116  218923 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 19:57:44.166136  218923 cache.go:58] Caching tarball of preloaded images
	I1006 19:57:44.166453  218923 preload.go:233] Found /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 19:57:44.166466  218923 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1006 19:57:44.166576  218923 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/config.json ...
	I1006 19:57:44.166599  218923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/config.json: {Name:mk475550cdb661222b5a12bc2da86a7ec1e44c49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:44.166754  218923 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 19:57:44.194146  218923 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 19:57:44.194192  218923 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 19:57:44.194206  218923 cache.go:232] Successfully downloaded all kic artifacts
	I1006 19:57:44.194228  218923 start.go:360] acquireMachinesLock for auto-053944: {Name:mk39469c2dc6ed40f3259891729e63ae3e1e557a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 19:57:44.194336  218923 start.go:364] duration metric: took 93.352µs to acquireMachinesLock for "auto-053944"
	I1006 19:57:44.194360  218923 start.go:93] Provisioning new machine with config: &{Name:auto-053944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-053944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 19:57:44.194419  218923 start.go:125] createHost starting for "" (driver="docker")
	W1006 19:57:42.466618  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	W1006 19:57:44.475252  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	W1006 19:57:46.967076  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	I1006 19:57:44.198147  218923 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1006 19:57:44.198409  218923 start.go:159] libmachine.API.Create for "auto-053944" (driver="docker")
	I1006 19:57:44.198445  218923 client.go:168] LocalClient.Create starting
	I1006 19:57:44.198519  218923 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem
	I1006 19:57:44.198554  218923 main.go:141] libmachine: Decoding PEM data...
	I1006 19:57:44.198567  218923 main.go:141] libmachine: Parsing certificate...
	I1006 19:57:44.198616  218923 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem
	I1006 19:57:44.198634  218923 main.go:141] libmachine: Decoding PEM data...
	I1006 19:57:44.198644  218923 main.go:141] libmachine: Parsing certificate...
	I1006 19:57:44.198999  218923 cli_runner.go:164] Run: docker network inspect auto-053944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 19:57:44.223325  218923 cli_runner.go:211] docker network inspect auto-053944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 19:57:44.223413  218923 network_create.go:284] running [docker network inspect auto-053944] to gather additional debugging logs...
	I1006 19:57:44.223428  218923 cli_runner.go:164] Run: docker network inspect auto-053944
	W1006 19:57:44.248876  218923 cli_runner.go:211] docker network inspect auto-053944 returned with exit code 1
	I1006 19:57:44.248901  218923 network_create.go:287] error running [docker network inspect auto-053944]: docker network inspect auto-053944: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-053944 not found
	I1006 19:57:44.248913  218923 network_create.go:289] output of [docker network inspect auto-053944]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-053944 not found
	
	** /stderr **
	I1006 19:57:44.249022  218923 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:57:44.288345  218923 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7058eae896da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:57:c0:bd:de:1b} reservation:<nil>}
	I1006 19:57:44.288675  218923 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e321ce8cf6dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:e6:76:87:fe:bb} reservation:<nil>}
	I1006 19:57:44.289104  218923 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-17bdbd7bd7be IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:93:f3:19:55:10} reservation:<nil>}
	I1006 19:57:44.289370  218923 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2e10a72004c0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:2c:92:d4:96:5e} reservation:<nil>}
	I1006 19:57:44.289745  218923 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a35510}
	I1006 19:57:44.289765  218923 network_create.go:124] attempt to create docker network auto-053944 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1006 19:57:44.289825  218923 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-053944 auto-053944
	I1006 19:57:44.369041  218923 network_create.go:108] docker network auto-053944 192.168.85.0/24 created
	I1006 19:57:44.369069  218923 kic.go:121] calculated static IP "192.168.85.2" for the "auto-053944" container
	I1006 19:57:44.369142  218923 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 19:57:44.391791  218923 cli_runner.go:164] Run: docker volume create auto-053944 --label name.minikube.sigs.k8s.io=auto-053944 --label created_by.minikube.sigs.k8s.io=true
	I1006 19:57:44.427781  218923 oci.go:103] Successfully created a docker volume auto-053944
	I1006 19:57:44.427875  218923 cli_runner.go:164] Run: docker run --rm --name auto-053944-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-053944 --entrypoint /usr/bin/test -v auto-053944:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 19:57:45.250602  218923 oci.go:107] Successfully prepared a docker volume auto-053944
	I1006 19:57:45.250649  218923 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:57:45.250673  218923 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 19:57:45.250749  218923 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-053944:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	W1006 19:57:49.467247  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	W1006 19:57:51.467819  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	I1006 19:57:50.390123  218923 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-053944:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (5.139334705s)
	I1006 19:57:50.390152  218923 kic.go:203] duration metric: took 5.139476246s to extract preloaded images to volume ...
	W1006 19:57:50.390283  218923 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 19:57:50.390383  218923 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 19:57:50.501496  218923 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-053944 --name auto-053944 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-053944 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-053944 --network auto-053944 --ip 192.168.85.2 --volume auto-053944:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 19:57:50.849869  218923 cli_runner.go:164] Run: docker container inspect auto-053944 --format={{.State.Running}}
	I1006 19:57:50.873557  218923 cli_runner.go:164] Run: docker container inspect auto-053944 --format={{.State.Status}}
	I1006 19:57:50.899329  218923 cli_runner.go:164] Run: docker exec auto-053944 stat /var/lib/dpkg/alternatives/iptables
	I1006 19:57:50.958456  218923 oci.go:144] the created container "auto-053944" has a running status.
	I1006 19:57:50.958493  218923 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/auto-053944/id_rsa...
	I1006 19:57:52.220062  218923 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-2540/.minikube/machines/auto-053944/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 19:57:52.240655  218923 cli_runner.go:164] Run: docker container inspect auto-053944 --format={{.State.Status}}
	I1006 19:57:52.257944  218923 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 19:57:52.257972  218923 kic_runner.go:114] Args: [docker exec --privileged auto-053944 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 19:57:52.298709  218923 cli_runner.go:164] Run: docker container inspect auto-053944 --format={{.State.Status}}
	I1006 19:57:52.317476  218923 machine.go:93] provisionDockerMachine start ...
	I1006 19:57:52.317568  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:52.351065  218923 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:52.351517  218923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1006 19:57:52.351529  218923 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 19:57:52.352807  218923 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1006 19:57:53.965541  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	W1006 19:57:55.965625  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	I1006 19:57:55.491376  218923 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-053944
	
	I1006 19:57:55.491401  218923 ubuntu.go:182] provisioning hostname "auto-053944"
	I1006 19:57:55.491470  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:55.509159  218923 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:55.509477  218923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1006 19:57:55.509493  218923 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-053944 && echo "auto-053944" | sudo tee /etc/hostname
	I1006 19:57:55.658382  218923 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-053944
	
	I1006 19:57:55.658466  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:55.678044  218923 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:55.678373  218923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1006 19:57:55.678395  218923 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-053944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-053944/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-053944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 19:57:55.814596  218923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 19:57:55.814623  218923 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-2540/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-2540/.minikube}
	I1006 19:57:55.814681  218923 ubuntu.go:190] setting up certificates
	I1006 19:57:55.814701  218923 provision.go:84] configureAuth start
	I1006 19:57:55.814777  218923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-053944
	I1006 19:57:55.832054  218923 provision.go:143] copyHostCerts
	I1006 19:57:55.832129  218923 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem, removing ...
	I1006 19:57:55.832165  218923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem
	I1006 19:57:55.832248  218923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/ca.pem (1082 bytes)
	I1006 19:57:55.832347  218923 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem, removing ...
	I1006 19:57:55.832358  218923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem
	I1006 19:57:55.832385  218923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/cert.pem (1123 bytes)
	I1006 19:57:55.832442  218923 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem, removing ...
	I1006 19:57:55.832451  218923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem
	I1006 19:57:55.832474  218923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-2540/.minikube/key.pem (1675 bytes)
	I1006 19:57:55.832549  218923 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem org=jenkins.auto-053944 san=[127.0.0.1 192.168.85.2 auto-053944 localhost minikube]
	I1006 19:57:56.709165  218923 provision.go:177] copyRemoteCerts
	I1006 19:57:56.709238  218923 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 19:57:56.709285  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:56.729786  218923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/auto-053944/id_rsa Username:docker}
	I1006 19:57:56.827537  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 19:57:56.847149  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1006 19:57:56.865493  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 19:57:56.883903  218923 provision.go:87] duration metric: took 1.069180531s to configureAuth
	I1006 19:57:56.883934  218923 ubuntu.go:206] setting minikube options for container-runtime
	I1006 19:57:56.884132  218923 config.go:182] Loaded profile config "auto-053944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:57:56.884262  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:56.901607  218923 main.go:141] libmachine: Using SSH client type: native
	I1006 19:57:56.901913  218923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33100 <nil> <nil>}
	I1006 19:57:56.901934  218923 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 19:57:57.161883  218923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 19:57:57.161909  218923 machine.go:96] duration metric: took 4.8444138s to provisionDockerMachine
	I1006 19:57:57.161918  218923 client.go:171] duration metric: took 12.963467524s to LocalClient.Create
	I1006 19:57:57.161931  218923 start.go:167] duration metric: took 12.96352451s to libmachine.API.Create "auto-053944"
	I1006 19:57:57.161937  218923 start.go:293] postStartSetup for "auto-053944" (driver="docker")
	I1006 19:57:57.161946  218923 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 19:57:57.162009  218923 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 19:57:57.162067  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:57.189437  218923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/auto-053944/id_rsa Username:docker}
	I1006 19:57:57.287588  218923 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 19:57:57.290825  218923 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 19:57:57.290852  218923 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 19:57:57.290864  218923 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/addons for local assets ...
	I1006 19:57:57.290918  218923 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-2540/.minikube/files for local assets ...
	I1006 19:57:57.291011  218923 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem -> 43502.pem in /etc/ssl/certs
	I1006 19:57:57.291120  218923 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 19:57:57.298467  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:57:57.316403  218923 start.go:296] duration metric: took 154.452313ms for postStartSetup
	I1006 19:57:57.316777  218923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-053944
	I1006 19:57:57.338435  218923 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/config.json ...
	I1006 19:57:57.338735  218923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:57:57.338790  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:57.356364  218923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/auto-053944/id_rsa Username:docker}
	I1006 19:57:57.448697  218923 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 19:57:57.453787  218923 start.go:128] duration metric: took 13.259354883s to createHost
	I1006 19:57:57.453813  218923 start.go:83] releasing machines lock for "auto-053944", held for 13.259468542s
	I1006 19:57:57.453884  218923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-053944
	I1006 19:57:57.472900  218923 ssh_runner.go:195] Run: cat /version.json
	I1006 19:57:57.472923  218923 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 19:57:57.472953  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:57.472989  218923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-053944
	I1006 19:57:57.494809  218923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/auto-053944/id_rsa Username:docker}
	I1006 19:57:57.495883  218923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/auto-053944/id_rsa Username:docker}
	I1006 19:57:57.591285  218923 ssh_runner.go:195] Run: systemctl --version
	I1006 19:57:57.680041  218923 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 19:57:57.717917  218923 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 19:57:57.722312  218923 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 19:57:57.722382  218923 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 19:57:57.750875  218923 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 19:57:57.750896  218923 start.go:495] detecting cgroup driver to use...
	I1006 19:57:57.750928  218923 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 19:57:57.750994  218923 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 19:57:57.768047  218923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 19:57:57.781455  218923 docker.go:218] disabling cri-docker service (if available) ...
	I1006 19:57:57.781526  218923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 19:57:57.801040  218923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 19:57:57.822720  218923 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 19:57:57.943479  218923 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 19:57:58.078115  218923 docker.go:234] disabling docker service ...
	I1006 19:57:58.078197  218923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 19:57:58.103016  218923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 19:57:58.117098  218923 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 19:57:58.239446  218923 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 19:57:58.367865  218923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 19:57:58.381209  218923 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 19:57:58.397098  218923 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1006 19:57:58.397219  218923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:58.405952  218923 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 19:57:58.406052  218923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:58.415507  218923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:58.425228  218923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:58.434240  218923 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 19:57:58.442725  218923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:58.452880  218923 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:58.470392  218923 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 19:57:58.479948  218923 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 19:57:58.488386  218923 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 19:57:58.495950  218923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:58.612206  218923 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 19:57:58.746993  218923 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 19:57:58.747098  218923 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 19:57:58.751167  218923 start.go:563] Will wait 60s for crictl version
	I1006 19:57:58.751276  218923 ssh_runner.go:195] Run: which crictl
	I1006 19:57:58.754987  218923 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 19:57:58.788087  218923 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1006 19:57:58.788199  218923 ssh_runner.go:195] Run: crio --version
	I1006 19:57:58.830668  218923 ssh_runner.go:195] Run: crio --version
	I1006 19:57:58.867362  218923 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1006 19:57:58.870144  218923 cli_runner.go:164] Run: docker network inspect auto-053944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 19:57:58.887128  218923 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1006 19:57:58.891235  218923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:57:58.901634  218923 kubeadm.go:883] updating cluster {Name:auto-053944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-053944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 19:57:58.901749  218923 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 19:57:58.901815  218923 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:57:58.948930  218923 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:57:58.948955  218923 crio.go:433] Images already preloaded, skipping extraction
	I1006 19:57:58.949018  218923 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 19:57:58.981424  218923 crio.go:514] all images are preloaded for cri-o runtime.
	I1006 19:57:58.981455  218923 cache_images.go:85] Images are preloaded, skipping loading
	I1006 19:57:58.981463  218923 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1006 19:57:58.981544  218923 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-053944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-053944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 19:57:58.981643  218923 ssh_runner.go:195] Run: crio config
	I1006 19:57:59.057069  218923 cni.go:84] Creating CNI manager for ""
	I1006 19:57:59.057095  218923 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:57:59.057109  218923 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 19:57:59.057161  218923 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-053944 NodeName:auto-053944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 19:57:59.057337  218923 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-053944"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 19:57:59.057432  218923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 19:57:59.065760  218923 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 19:57:59.065857  218923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 19:57:59.073834  218923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1006 19:57:59.087463  218923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 19:57:59.100737  218923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1006 19:57:59.114683  218923 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1006 19:57:59.118476  218923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 19:57:59.128523  218923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 19:57:59.246018  218923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 19:57:59.263293  218923 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944 for IP: 192.168.85.2
	I1006 19:57:59.263364  218923 certs.go:195] generating shared ca certs ...
	I1006 19:57:59.263400  218923 certs.go:227] acquiring lock for ca certs: {Name:mke29bef1b13829052576090d66d8864f7cbc64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:59.263569  218923 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key
	I1006 19:57:59.263648  218923 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key
	I1006 19:57:59.263671  218923 certs.go:257] generating profile certs ...
	I1006 19:57:59.263795  218923 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.key
	I1006 19:57:59.263848  218923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.crt with IP's: []
	I1006 19:57:59.922101  218923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.crt ...
	I1006 19:57:59.922139  218923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.crt: {Name:mk9a4a220a47ff3ca80c57e982cfdac4ebcba118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:59.922341  218923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.key ...
	I1006 19:57:59.922355  218923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.key: {Name:mkf958b4adec13e459bb8782b35d81ceacb5ac4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:57:59.922449  218923 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.key.77e7bfad
	I1006 19:57:59.922464  218923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.crt.77e7bfad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1006 19:58:01.035790  218923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.crt.77e7bfad ...
	I1006 19:58:01.035865  218923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.crt.77e7bfad: {Name:mk8ff224b85267052de956b9f235788db252faa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:58:01.036193  218923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.key.77e7bfad ...
	I1006 19:58:01.036237  218923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.key.77e7bfad: {Name:mkd1c5846047e929423b81ed27ea8d0d62dfd78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:58:01.036367  218923 certs.go:382] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.crt.77e7bfad -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.crt
	I1006 19:58:01.036490  218923 certs.go:386] copying /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.key.77e7bfad -> /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.key
	I1006 19:58:01.036594  218923 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.key
	I1006 19:58:01.036650  218923 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.crt with IP's: []
	I1006 19:58:01.535938  218923 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.crt ...
	I1006 19:58:01.535975  218923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.crt: {Name:mk4a9b87970e1b28b78560b009769c5b9b6d281d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:58:01.536170  218923 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.key ...
	I1006 19:58:01.536184  218923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.key: {Name:mkfaaa9b5cfbe207b7f35dfe33c3c1dc7ec4f2a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 19:58:01.536396  218923 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem (1338 bytes)
	W1006 19:58:01.536449  218923 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350_empty.pem, impossibly tiny 0 bytes
	I1006 19:58:01.536461  218923 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 19:58:01.536489  218923 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/ca.pem (1082 bytes)
	I1006 19:58:01.536515  218923 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/cert.pem (1123 bytes)
	I1006 19:58:01.536540  218923 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/certs/key.pem (1675 bytes)
	I1006 19:58:01.536587  218923 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem (1708 bytes)
	I1006 19:58:01.537208  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 19:58:01.556186  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 19:58:01.578013  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 19:58:01.597241  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 19:58:01.614965  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1006 19:58:01.633980  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 19:58:01.653720  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 19:58:01.673550  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 19:58:01.692539  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/certs/4350.pem --> /usr/share/ca-certificates/4350.pem (1338 bytes)
	I1006 19:58:01.713171  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/ssl/certs/43502.pem --> /usr/share/ca-certificates/43502.pem (1708 bytes)
	I1006 19:58:01.732152  218923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-2540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 19:58:01.751469  218923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 19:58:01.765178  218923 ssh_runner.go:195] Run: openssl version
	I1006 19:58:01.771623  218923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 19:58:01.781663  218923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:58:01.786199  218923 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 18:43 /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:58:01.786286  218923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 19:58:01.839231  218923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 19:58:01.848243  218923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4350.pem && ln -fs /usr/share/ca-certificates/4350.pem /etc/ssl/certs/4350.pem"
	I1006 19:58:01.856932  218923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4350.pem
	I1006 19:58:01.860883  218923 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 18:49 /usr/share/ca-certificates/4350.pem
	I1006 19:58:01.860957  218923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4350.pem
	I1006 19:58:01.902504  218923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4350.pem /etc/ssl/certs/51391683.0"
	I1006 19:58:01.910917  218923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43502.pem && ln -fs /usr/share/ca-certificates/43502.pem /etc/ssl/certs/43502.pem"
	I1006 19:58:01.919432  218923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43502.pem
	I1006 19:58:01.923324  218923 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 18:49 /usr/share/ca-certificates/43502.pem
	I1006 19:58:01.923392  218923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43502.pem
	I1006 19:58:01.971041  218923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43502.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 19:58:01.981133  218923 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 19:58:01.985133  218923 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 19:58:01.985185  218923 kubeadm.go:400] StartCluster: {Name:auto-053944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-053944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:58:01.985269  218923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 19:58:01.985344  218923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 19:58:02.013783  218923 cri.go:89] found id: ""
	I1006 19:58:02.013930  218923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 19:58:02.032994  218923 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 19:58:02.042155  218923 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 19:58:02.042224  218923 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 19:58:02.051835  218923 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 19:58:02.051900  218923 kubeadm.go:157] found existing configuration files:
	
	I1006 19:58:02.051982  218923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 19:58:02.060507  218923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 19:58:02.060619  218923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 19:58:02.068473  218923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 19:58:02.078800  218923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 19:58:02.078893  218923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 19:58:02.086842  218923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 19:58:02.095029  218923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 19:58:02.095126  218923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 19:58:02.108239  218923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 19:58:02.117648  218923 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 19:58:02.117753  218923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 19:58:02.125851  218923 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 19:58:02.166460  218923 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 19:58:02.166604  218923 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 19:58:02.193510  218923 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 19:58:02.193610  218923 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 19:58:02.193667  218923 kubeadm.go:318] OS: Linux
	I1006 19:58:02.193730  218923 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 19:58:02.193799  218923 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 19:58:02.193866  218923 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 19:58:02.193936  218923 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 19:58:02.194021  218923 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 19:58:02.194089  218923 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 19:58:02.194150  218923 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 19:58:02.194215  218923 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 19:58:02.194280  218923 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 19:58:02.265437  218923 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 19:58:02.265621  218923 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 19:58:02.265785  218923 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 19:58:02.273538  218923 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1006 19:57:57.968469  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	W1006 19:57:59.968739  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	I1006 19:58:02.279476  218923 out.go:252]   - Generating certificates and keys ...
	I1006 19:58:02.279625  218923 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 19:58:02.279809  218923 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 19:58:02.434303  218923 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 19:58:02.673467  218923 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 19:58:03.679747  218923 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	W1006 19:58:02.467487  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	W1006 19:58:04.967229  214483 pod_ready.go:104] pod "coredns-66bc5c9577-bns67" is not "Ready", error: <nil>
	I1006 19:58:05.485089  214483 pod_ready.go:94] pod "coredns-66bc5c9577-bns67" is "Ready"
	I1006 19:58:05.485122  214483 pod_ready.go:86] duration metric: took 31.525127088s for pod "coredns-66bc5c9577-bns67" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:05.509416  214483 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:05.518284  214483 pod_ready.go:94] pod "etcd-default-k8s-diff-port-997276" is "Ready"
	I1006 19:58:05.518314  214483 pod_ready.go:86] duration metric: took 8.865618ms for pod "etcd-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:05.595042  214483 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:05.605126  214483 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-997276" is "Ready"
	I1006 19:58:05.605157  214483 pod_ready.go:86] duration metric: took 10.068676ms for pod "kube-apiserver-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:05.610428  214483 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:05.666393  214483 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-997276" is "Ready"
	I1006 19:58:05.666434  214483 pod_ready.go:86] duration metric: took 55.955539ms for pod "kube-controller-manager-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:05.865164  214483 pod_ready.go:83] waiting for pod "kube-proxy-zl7gg" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:06.264358  214483 pod_ready.go:94] pod "kube-proxy-zl7gg" is "Ready"
	I1006 19:58:06.264384  214483 pod_ready.go:86] duration metric: took 399.191622ms for pod "kube-proxy-zl7gg" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:06.465612  214483 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:06.865582  214483 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-997276" is "Ready"
	I1006 19:58:06.865608  214483 pod_ready.go:86] duration metric: took 399.970246ms for pod "kube-scheduler-default-k8s-diff-port-997276" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 19:58:06.865620  214483 pod_ready.go:40] duration metric: took 32.914411778s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 19:58:06.940846  214483 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 19:58:06.944152  214483 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-997276" cluster and "default" namespace by default
	I1006 19:58:04.549780  218923 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 19:58:05.065568  218923 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 19:58:05.065862  218923 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-053944 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1006 19:58:06.585071  218923 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 19:58:06.585426  218923 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-053944 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1006 19:58:06.985205  218923 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 19:58:07.867730  218923 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 19:58:08.225254  218923 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 19:58:08.225565  218923 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 19:58:08.602025  218923 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 19:58:08.858064  218923 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 19:58:09.162673  218923 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 19:58:10.154418  218923 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 19:58:10.591668  218923 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 19:58:10.592334  218923 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 19:58:10.597120  218923 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 19:58:10.600701  218923 out.go:252]   - Booting up control plane ...
	I1006 19:58:10.600810  218923 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 19:58:10.602467  218923 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 19:58:10.612074  218923 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 19:58:10.633958  218923 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 19:58:10.634086  218923 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 19:58:10.643100  218923 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 19:58:10.643606  218923 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 19:58:10.644040  218923 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 19:58:10.771492  218923 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 19:58:10.771623  218923 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 19:58:11.773060  218923 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001844913s
	I1006 19:58:11.776606  218923 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 19:58:11.776726  218923 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1006 19:58:11.777022  218923 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 19:58:11.777115  218923 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 19:58:17.037765  218923 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.261119374s
	I1006 19:58:17.431149  218923 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.65405661s
	I1006 19:58:18.278204  218923 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501511828s
	I1006 19:58:18.300947  218923 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 19:58:18.315901  218923 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 19:58:18.331774  218923 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 19:58:18.332009  218923 kubeadm.go:318] [mark-control-plane] Marking the node auto-053944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 19:58:18.344765  218923 kubeadm.go:318] [bootstrap-token] Using token: hnzsyo.7j0qpltq6uuqhwmm
	I1006 19:58:18.347657  218923 out.go:252]   - Configuring RBAC rules ...
	I1006 19:58:18.347809  218923 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 19:58:18.352168  218923 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 19:58:18.363140  218923 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 19:58:18.368041  218923 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 19:58:18.373360  218923 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 19:58:18.381277  218923 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 19:58:18.686499  218923 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 19:58:19.131265  218923 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1006 19:58:19.685878  218923 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1006 19:58:19.687588  218923 kubeadm.go:318] 
	I1006 19:58:19.687667  218923 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1006 19:58:19.687676  218923 kubeadm.go:318] 
	I1006 19:58:19.687849  218923 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1006 19:58:19.687865  218923 kubeadm.go:318] 
	I1006 19:58:19.687910  218923 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1006 19:58:19.687976  218923 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 19:58:19.688038  218923 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 19:58:19.688047  218923 kubeadm.go:318] 
	I1006 19:58:19.688106  218923 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1006 19:58:19.688115  218923 kubeadm.go:318] 
	I1006 19:58:19.688167  218923 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 19:58:19.688174  218923 kubeadm.go:318] 
	I1006 19:58:19.688230  218923 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1006 19:58:19.688321  218923 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 19:58:19.688401  218923 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 19:58:19.688408  218923 kubeadm.go:318] 
	I1006 19:58:19.688506  218923 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 19:58:19.688589  218923 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1006 19:58:19.688595  218923 kubeadm.go:318] 
	I1006 19:58:19.688684  218923 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token hnzsyo.7j0qpltq6uuqhwmm \
	I1006 19:58:19.688808  218923 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd \
	I1006 19:58:19.688834  218923 kubeadm.go:318] 	--control-plane 
	I1006 19:58:19.688841  218923 kubeadm.go:318] 
	I1006 19:58:19.688930  218923 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1006 19:58:19.688938  218923 kubeadm.go:318] 
	I1006 19:58:19.689024  218923 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token hnzsyo.7j0qpltq6uuqhwmm \
	I1006 19:58:19.689153  218923 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:201c13f59a2d25d619915dd6f8e3b060debb373b102f8c8209c331adce5a89dd 
	I1006 19:58:19.693429  218923 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 19:58:19.693678  218923 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 19:58:19.693823  218923 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 19:58:19.693850  218923 cni.go:84] Creating CNI manager for ""
	I1006 19:58:19.693858  218923 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 19:58:19.696792  218923 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1006 19:58:19.699662  218923 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 19:58:19.704784  218923 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1006 19:58:19.704810  218923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1006 19:58:19.720624  218923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 19:58:20.285894  218923 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 19:58:20.286041  218923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:58:20.286106  218923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-053944 minikube.k8s.io/updated_at=2025_10_06T19_58_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81 minikube.k8s.io/name=auto-053944 minikube.k8s.io/primary=true
	I1006 19:58:20.515778  218923 ops.go:34] apiserver oom_adj: -16
	I1006 19:58:20.515894  218923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:58:21.016925  218923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:58:21.516021  218923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:58:22.016697  218923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:58:22.516861  218923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:58:23.016584  218923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 19:58:23.517596  218923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.239198903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.253829773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.254993134Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.280363454Z" level=info msg="Created container f23c7b1e4e02b9f141c926732d0f2535e9f3243ba0bee45a42e1d11d5e2c978c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs/dashboard-metrics-scraper" id=fa1a3342-30a0-4c73-b00a-bde3648eb9dc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.285651742Z" level=info msg="Starting container: f23c7b1e4e02b9f141c926732d0f2535e9f3243ba0bee45a42e1d11d5e2c978c" id=5b11b0cc-297d-4f1f-bc5d-1ab254dac129 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.290846407Z" level=info msg="Started container" PID=1667 containerID=f23c7b1e4e02b9f141c926732d0f2535e9f3243ba0bee45a42e1d11d5e2c978c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs/dashboard-metrics-scraper id=5b11b0cc-297d-4f1f-bc5d-1ab254dac129 name=/runtime.v1.RuntimeService/StartContainer sandboxID=142a5ae69a029c6c56165d4e5486b54ec123e924b46d886cb03084b7334126cb
	Oct 06 19:58:10 default-k8s-diff-port-997276 conmon[1665]: conmon f23c7b1e4e02b9f141c9 <ninfo>: container 1667 exited with status 1
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.607195413Z" level=info msg="Removing container: 21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed" id=5141fa03-597c-44ee-9796-e50b5d5007f0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.644968919Z" level=info msg="Error loading conmon cgroup of container 21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed: cgroup deleted" id=5141fa03-597c-44ee-9796-e50b5d5007f0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 19:58:10 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:10.648427535Z" level=info msg="Removed container 21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs/dashboard-metrics-scraper" id=5141fa03-597c-44ee-9796-e50b5d5007f0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.89783933Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.903295646Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.903332299Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.90335634Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.906489642Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.906529872Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.906548711Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.915822099Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.91585748Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.91587426Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.91884632Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.918878837Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.918898448Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.927033911Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 19:58:12 default-k8s-diff-port-997276 crio[648]: time="2025-10-06T19:58:12.927073165Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f23c7b1e4e02b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago       Exited              dashboard-metrics-scraper   2                   142a5ae69a029       dashboard-metrics-scraper-6ffb444bf9-vtgzs             kubernetes-dashboard
	94543004a692d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago       Running             storage-provisioner         2                   32b4320e94207       storage-provisioner                                    kube-system
	4e61e37ab2416       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   0fe92d30560f4       kubernetes-dashboard-855c9754f9-l9n6g                  kubernetes-dashboard
	7b0a7293bb031       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   1681e32070bc5       kindnet-twtwt                                          kube-system
	d7f4a4e3f96ad       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   d91653a2873a4       coredns-66bc5c9577-bns67                               kube-system
	021f8a3c8caf4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   c2b213aa0926c       kube-proxy-zl7gg                                       kube-system
	9f8260c3ac833       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   32b4320e94207       storage-provisioner                                    kube-system
	d94f016d6a660       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   68e55d76e792f       busybox                                                default
	69016b9f99d77       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   2c33b7579b455       kube-scheduler-default-k8s-diff-port-997276            kube-system
	d55af54186cd2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   80a9efa5a960f       kube-controller-manager-default-k8s-diff-port-997276   kube-system
	e8361e8e1ee06       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   1f227176debee       etcd-default-k8s-diff-port-997276                      kube-system
	3ccfad32d3fc7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   721ffe017d46c       kube-apiserver-default-k8s-diff-port-997276            kube-system
	
	
	==> coredns [d7f4a4e3f96ad1311da2b4b0839dfdde33ceb36b5b11c73ecf55feda067baebb] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49702 - 22719 "HINFO IN 6995182671490730965.1443162241631516290. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013380802s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-997276
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-997276
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=default-k8s-diff-port-997276
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T19_55_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 19:55:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-997276
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 19:58:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 19:58:00 +0000   Mon, 06 Oct 2025 19:55:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 19:58:00 +0000   Mon, 06 Oct 2025 19:55:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 19:58:00 +0000   Mon, 06 Oct 2025 19:55:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 19:58:00 +0000   Mon, 06 Oct 2025 19:56:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-997276
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 f613ab9f1dc7456b9fc37f90f2631726
	  System UUID:                4764672c-0e9d-4c30-bf0e-576675527b0d
	  Boot ID:                    28123a58-a718-41b5-bac7-da83ff2d3a0d
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-bns67                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-997276                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m28s
	  kube-system                 kindnet-twtwt                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-997276             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-997276    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-zl7gg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-default-k8s-diff-port-997276             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vtgzs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-l9n6g                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m28s                  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m28s                  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s                  kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s                  node-controller  Node default-k8s-diff-port-997276 event: Registered Node default-k8s-diff-port-997276 in Controller
	  Normal   NodeReady                101s                   kubelet          Node default-k8s-diff-port-997276 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-997276 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                    node-controller  Node default-k8s-diff-port-997276 event: Registered Node default-k8s-diff-port-997276 in Controller
	
	
	==> dmesg <==
	[Oct 6 19:27] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:28] overlayfs: idmapped layers are currently not supported
	[ +38.864395] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:29] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:30] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:31] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:33] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:35] overlayfs: idmapped layers are currently not supported
	[ +30.685217] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:39] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:41] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:50] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:52] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:55] overlayfs: idmapped layers are currently not supported
	[ +29.761517] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:56] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:57] overlayfs: idmapped layers are currently not supported
	[  +2.641672] overlayfs: idmapped layers are currently not supported
	[Oct 6 19:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e8361e8e1ee0632342d163d9478d496057250b3eeebfb0c28f6c1c0c8879ec24] <==
	{"level":"warn","ts":"2025-10-06T19:57:26.394070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.436168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.480796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.505019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.538781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.591165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.635816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.678894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.713865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.787224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.815452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.870827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.911995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.951265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:26.988395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.013692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.038409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.071970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.113706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.151102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.226545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.249875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.281747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.325055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T19:57:27.655820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53158","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:58:26 up  1:40,  0 user,  load average: 6.96, 3.99, 2.60
	Linux default-k8s-diff-port-997276 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7b0a7293bb03120d2db1b7f8559bcd7af73005b9a4cb29e752ff4eff8a6184d5] <==
	I1006 19:57:32.662420       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1006 19:57:32.706878       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1006 19:57:32.707578       1 main.go:148] setting mtu 1500 for CNI 
	I1006 19:57:32.707628       1 main.go:178] kindnetd IP family: "ipv4"
	I1006 19:57:32.707667       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-06T19:57:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1006 19:57:32.896438       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1006 19:57:32.896467       1 controller.go:381] "Waiting for informer caches to sync"
	I1006 19:57:32.896476       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1006 19:57:32.949409       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1006 19:58:02.897585       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1006 19:58:02.897750       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1006 19:58:02.949248       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1006 19:58:02.949354       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1006 19:58:04.475207       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1006 19:58:04.475253       1 metrics.go:72] Registering metrics
	I1006 19:58:04.475356       1 controller.go:711] "Syncing nftables rules"
	I1006 19:58:12.897491       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:58:12.897564       1 main.go:301] handling current node
	I1006 19:58:22.899784       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1006 19:58:22.899820       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3ccfad32d3fc760df3c3a85997578af79aef13072ed1d94e610c623e692996c5] <==
	I1006 19:57:29.969598       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1006 19:57:29.999421       1 aggregator.go:171] initial CRD sync complete...
	I1006 19:57:29.999446       1 autoregister_controller.go:144] Starting autoregister controller
	I1006 19:57:29.999454       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 19:57:29.999461       1 cache.go:39] Caches are synced for autoregister controller
	I1006 19:57:30.027759       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 19:57:30.028234       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 19:57:30.028279       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1006 19:57:30.028349       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1006 19:57:30.057573       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 19:57:30.103962       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1006 19:57:30.103993       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1006 19:57:30.106659       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1006 19:57:30.153052       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 19:57:30.174053       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1006 19:57:30.410483       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 19:57:32.565599       1 controller.go:667] quota admission added evaluator for: namespaces
	I1006 19:57:33.032833       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 19:57:33.273158       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 19:57:33.380238       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 19:57:33.669214       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.156.166"}
	I1006 19:57:33.695324       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.144.221"}
	I1006 19:57:35.327097       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 19:57:35.675102       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 19:57:35.772719       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d55af54186cd2219cd4bae48a2441c417921efdadcb8fc6384bac72028d350db] <==
	I1006 19:57:35.308606       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 19:57:35.313246       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1006 19:57:35.313369       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:57:35.313378       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 19:57:35.313385       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 19:57:35.314313       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1006 19:57:35.314393       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1006 19:57:35.314622       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1006 19:57:35.314638       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1006 19:57:35.316064       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1006 19:57:35.327858       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1006 19:57:35.332685       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1006 19:57:35.335882       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1006 19:57:35.336345       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1006 19:57:35.342397       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1006 19:57:35.343783       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1006 19:57:35.344623       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 19:57:35.346481       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1006 19:57:35.349755       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1006 19:57:35.355791       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 19:57:35.356348       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1006 19:57:35.367121       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1006 19:57:35.367231       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1006 19:57:35.371642       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1006 19:57:35.374501       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	
	
	==> kube-proxy [021f8a3c8caf4e3062528b2830e6d3bd81c80b253a054789688df77fd6566fa9] <==
	I1006 19:57:33.717827       1 server_linux.go:53] "Using iptables proxy"
	I1006 19:57:33.852865       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 19:57:33.959779       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 19:57:33.959812       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1006 19:57:33.959889       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 19:57:34.174567       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 19:57:34.174624       1 server_linux.go:132] "Using iptables Proxier"
	I1006 19:57:34.183166       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 19:57:34.183475       1 server.go:527] "Version info" version="v1.34.1"
	I1006 19:57:34.183500       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:57:34.184926       1 config.go:200] "Starting service config controller"
	I1006 19:57:34.184947       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 19:57:34.184983       1 config.go:106] "Starting endpoint slice config controller"
	I1006 19:57:34.184988       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 19:57:34.184998       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 19:57:34.185001       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 19:57:34.185654       1 config.go:309] "Starting node config controller"
	I1006 19:57:34.185670       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 19:57:34.185677       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 19:57:34.285407       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 19:57:34.285423       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 19:57:34.285455       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [69016b9f99d774f122c6a833714dc8aa9a9bed369630bb89cab483dc73bb2308] <==
	I1006 19:57:27.301688       1 serving.go:386] Generated self-signed cert in-memory
	I1006 19:57:33.039184       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 19:57:33.039344       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 19:57:33.084320       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 19:57:33.084435       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1006 19:57:33.084478       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1006 19:57:33.084510       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 19:57:33.085379       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:57:33.085413       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 19:57:33.085565       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:57:33.085591       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:57:33.197726       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1006 19:57:33.206004       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 19:57:33.206906       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 19:57:31 default-k8s-diff-port-997276 kubelet[773]: W1006 19:57:31.988608     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/crio-d91653a2873a4cb409b63531895bf6398dfd971f42632321639bed92b14996d2 WatchSource:0}: Error finding container d91653a2873a4cb409b63531895bf6398dfd971f42632321639bed92b14996d2: Status 404 returned error can't find the container with id d91653a2873a4cb409b63531895bf6398dfd971f42632321639bed92b14996d2
	Oct 06 19:57:35 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:35.933401     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/47bee36e-d532-4335-89c4-581d78beb60b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vtgzs\" (UID: \"47bee36e-d532-4335-89c4-581d78beb60b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs"
	Oct 06 19:57:35 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:35.933449     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8xfs\" (UniqueName: \"kubernetes.io/projected/6043dfa7-271b-4e1a-be38-b574fee8ce17-kube-api-access-n8xfs\") pod \"kubernetes-dashboard-855c9754f9-l9n6g\" (UID: \"6043dfa7-271b-4e1a-be38-b574fee8ce17\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9n6g"
	Oct 06 19:57:35 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:35.933475     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj66x\" (UniqueName: \"kubernetes.io/projected/47bee36e-d532-4335-89c4-581d78beb60b-kube-api-access-nj66x\") pod \"dashboard-metrics-scraper-6ffb444bf9-vtgzs\" (UID: \"47bee36e-d532-4335-89c4-581d78beb60b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs"
	Oct 06 19:57:35 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:35.933493     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6043dfa7-271b-4e1a-be38-b574fee8ce17-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-l9n6g\" (UID: \"6043dfa7-271b-4e1a-be38-b574fee8ce17\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9n6g"
	Oct 06 19:57:36 default-k8s-diff-port-997276 kubelet[773]: W1006 19:57:36.249330     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4fc3831db94848cd06748cc5e8c8533f2de0215c11120be2aa7b676a4a1c113b/crio-142a5ae69a029c6c56165d4e5486b54ec123e924b46d886cb03084b7334126cb WatchSource:0}: Error finding container 142a5ae69a029c6c56165d4e5486b54ec123e924b46d886cb03084b7334126cb: Status 404 returned error can't find the container with id 142a5ae69a029c6c56165d4e5486b54ec123e924b46d886cb03084b7334126cb
	Oct 06 19:57:50 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:50.527534     773 scope.go:117] "RemoveContainer" containerID="6ca18da481f6dff47ce3169c15af45d04f8c51025949d3f11010971a81694c77"
	Oct 06 19:57:50 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:50.586472     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9n6g" podStartSLOduration=8.450800009 podStartE2EDuration="15.586438494s" podCreationTimestamp="2025-10-06 19:57:35 +0000 UTC" firstStartedPulling="2025-10-06 19:57:36.23586088 +0000 UTC m=+15.435737179" lastFinishedPulling="2025-10-06 19:57:43.371499357 +0000 UTC m=+22.571375664" observedRunningTime="2025-10-06 19:57:43.559784431 +0000 UTC m=+22.759660754" watchObservedRunningTime="2025-10-06 19:57:50.586438494 +0000 UTC m=+29.786314793"
	Oct 06 19:57:51 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:51.538447     773 scope.go:117] "RemoveContainer" containerID="6ca18da481f6dff47ce3169c15af45d04f8c51025949d3f11010971a81694c77"
	Oct 06 19:57:51 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:51.539419     773 scope.go:117] "RemoveContainer" containerID="21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed"
	Oct 06 19:57:51 default-k8s-diff-port-997276 kubelet[773]: E1006 19:57:51.547018     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vtgzs_kubernetes-dashboard(47bee36e-d532-4335-89c4-581d78beb60b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs" podUID="47bee36e-d532-4335-89c4-581d78beb60b"
	Oct 06 19:57:52 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:52.538013     773 scope.go:117] "RemoveContainer" containerID="21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed"
	Oct 06 19:57:52 default-k8s-diff-port-997276 kubelet[773]: E1006 19:57:52.538182     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vtgzs_kubernetes-dashboard(47bee36e-d532-4335-89c4-581d78beb60b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs" podUID="47bee36e-d532-4335-89c4-581d78beb60b"
	Oct 06 19:57:56 default-k8s-diff-port-997276 kubelet[773]: I1006 19:57:56.192728     773 scope.go:117] "RemoveContainer" containerID="21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed"
	Oct 06 19:57:56 default-k8s-diff-port-997276 kubelet[773]: E1006 19:57:56.192957     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vtgzs_kubernetes-dashboard(47bee36e-d532-4335-89c4-581d78beb60b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs" podUID="47bee36e-d532-4335-89c4-581d78beb60b"
	Oct 06 19:58:03 default-k8s-diff-port-997276 kubelet[773]: I1006 19:58:03.580976     773 scope.go:117] "RemoveContainer" containerID="9f8260c3ac833cb980660b518720852ae3a00d4314fd06714ca37ecb0e765aa1"
	Oct 06 19:58:10 default-k8s-diff-port-997276 kubelet[773]: I1006 19:58:10.235178     773 scope.go:117] "RemoveContainer" containerID="21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed"
	Oct 06 19:58:10 default-k8s-diff-port-997276 kubelet[773]: I1006 19:58:10.602344     773 scope.go:117] "RemoveContainer" containerID="21836af816c1af0c04772a3c5e0ed0f8fad0798d185ed7307bbd34590099f4ed"
	Oct 06 19:58:10 default-k8s-diff-port-997276 kubelet[773]: I1006 19:58:10.603400     773 scope.go:117] "RemoveContainer" containerID="f23c7b1e4e02b9f141c926732d0f2535e9f3243ba0bee45a42e1d11d5e2c978c"
	Oct 06 19:58:10 default-k8s-diff-port-997276 kubelet[773]: E1006 19:58:10.607136     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vtgzs_kubernetes-dashboard(47bee36e-d532-4335-89c4-581d78beb60b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs" podUID="47bee36e-d532-4335-89c4-581d78beb60b"
	Oct 06 19:58:16 default-k8s-diff-port-997276 kubelet[773]: I1006 19:58:16.192437     773 scope.go:117] "RemoveContainer" containerID="f23c7b1e4e02b9f141c926732d0f2535e9f3243ba0bee45a42e1d11d5e2c978c"
	Oct 06 19:58:16 default-k8s-diff-port-997276 kubelet[773]: E1006 19:58:16.193140     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vtgzs_kubernetes-dashboard(47bee36e-d532-4335-89c4-581d78beb60b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vtgzs" podUID="47bee36e-d532-4335-89c4-581d78beb60b"
	Oct 06 19:58:19 default-k8s-diff-port-997276 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 06 19:58:19 default-k8s-diff-port-997276 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 06 19:58:19 default-k8s-diff-port-997276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4e61e37ab2416491e37ee9385b1a5a53f023ad0488c291c5826cce8b6d887514] <==
	2025/10/06 19:57:43 Using namespace: kubernetes-dashboard
	2025/10/06 19:57:43 Using in-cluster config to connect to apiserver
	2025/10/06 19:57:43 Using secret token for csrf signing
	2025/10/06 19:57:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/06 19:57:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/06 19:57:43 Successful initial request to the apiserver, version: v1.34.1
	2025/10/06 19:57:43 Generating JWE encryption key
	2025/10/06 19:57:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/06 19:57:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/06 19:57:44 Initializing JWE encryption key from synchronized object
	2025/10/06 19:57:44 Creating in-cluster Sidecar client
	2025/10/06 19:57:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/06 19:57:44 Serving insecurely on HTTP port: 9090
	2025/10/06 19:58:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/06 19:57:43 Starting overwatch
	
	
	==> storage-provisioner [94543004a692d2869152c4706f66a2c0539fd2fc422c7b5955c377fa34be7e27] <==
	I1006 19:58:03.656937       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 19:58:03.677389       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 19:58:03.677521       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1006 19:58:03.680546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:07.135958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:11.397479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:14.995672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:18.049367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:21.071812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:21.079193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:58:21.079415       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 19:58:21.080251       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6aac352d-9443-44be-81f2-135d3c658690", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-997276_6f74d579-0b8f-494a-ae0a-c1787b1a7e8a became leader
	W1006 19:58:21.083003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:58:21.083293       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-997276_6f74d579-0b8f-494a-ae0a-c1787b1a7e8a!
	W1006 19:58:21.092582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1006 19:58:21.183877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-997276_6f74d579-0b8f-494a-ae0a-c1787b1a7e8a!
	W1006 19:58:23.098777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:23.108250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:25.121397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 19:58:25.134134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9f8260c3ac833cb980660b518720852ae3a00d4314fd06714ca37ecb0e765aa1] <==
	I1006 19:57:32.949471       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1006 19:58:03.026247       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-997276 -n default-k8s-diff-port-997276
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-997276 -n default-k8s-diff-port-997276: exit status 2 (396.124435ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-997276 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.50s)
E1006 20:04:09.248022    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 20:04:09.409609    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 20:04:09.731283    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 20:04:10.372722    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 20:04:11.654151    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 20:04:13.722441    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 20:04:14.216121    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (258/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 39.28
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 37.08
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 178.36
31 TestAddons/serial/GCPAuth/Namespaces 0.24
32 TestAddons/serial/GCPAuth/FakeCredentials 8.83
48 TestAddons/StoppedEnableDisable 12.53
49 TestCertOptions 32.84
50 TestCertExpiration 331.19
59 TestErrorSpam/setup 30.52
60 TestErrorSpam/start 0.76
61 TestErrorSpam/status 1.06
62 TestErrorSpam/pause 6.49
63 TestErrorSpam/unpause 6.36
64 TestErrorSpam/stop 1.42
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 80.65
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 29.17
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.12
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.4
76 TestFunctional/serial/CacheCmd/cache/add_local 1.09
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 51.91
85 TestFunctional/serial/ComponentHealth 0.11
86 TestFunctional/serial/LogsCmd 1.45
87 TestFunctional/serial/LogsFileCmd 1.49
88 TestFunctional/serial/InvalidService 4.4
90 TestFunctional/parallel/ConfigCmd 0.43
91 TestFunctional/parallel/DashboardCmd 13.27
92 TestFunctional/parallel/DryRun 0.62
93 TestFunctional/parallel/InternationalLanguage 0.27
94 TestFunctional/parallel/StatusCmd 1.24
99 TestFunctional/parallel/AddonsCmd 0.17
100 TestFunctional/parallel/PersistentVolumeClaim 24.8
102 TestFunctional/parallel/SSHCmd 0.72
103 TestFunctional/parallel/CpCmd 2.56
105 TestFunctional/parallel/FileSync 0.33
106 TestFunctional/parallel/CertSync 1.71
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.75
114 TestFunctional/parallel/License 0.31
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.74
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.46
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
128 TestFunctional/parallel/ProfileCmd/profile_list 0.4
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
130 TestFunctional/parallel/MountCmd/any-port 6.74
131 TestFunctional/parallel/MountCmd/specific-port 2.05
132 TestFunctional/parallel/MountCmd/VerifyCleanup 2.09
133 TestFunctional/parallel/ServiceCmd/List 0.64
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
135 TestFunctional/parallel/Version/short 0.1
136 TestFunctional/parallel/Version/components 1.33
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
144 TestFunctional/parallel/ImageCommands/ImageBuild 4.08
145 TestFunctional/parallel/ImageCommands/Setup 0.64
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 193.17
164 TestMultiControlPlane/serial/DeployApp 8.08
165 TestMultiControlPlane/serial/PingHostFromPods 1.47
166 TestMultiControlPlane/serial/AddWorkerNode 59.28
167 TestMultiControlPlane/serial/NodeLabels 0.1
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
169 TestMultiControlPlane/serial/CopyFile 19.15
170 TestMultiControlPlane/serial/StopSecondaryNode 12.77
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
172 TestMultiControlPlane/serial/RestartSecondaryNode 29.51
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 156.95
175 TestMultiControlPlane/serial/DeleteSecondaryNode 12.03
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
177 TestMultiControlPlane/serial/StopCluster 35.52
178 TestMultiControlPlane/serial/RestartCluster 89.53
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
180 TestMultiControlPlane/serial/AddSecondaryNode 79.59
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
185 TestJSONOutput/start/Command 82.24
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.71
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 44.31
211 TestKicCustomNetwork/use_default_bridge_network 40.09
212 TestKicExistingNetwork 36.79
213 TestKicCustomSubnet 35.54
214 TestKicStaticIP 34.52
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 68.32
219 TestMountStart/serial/StartWithMountFirst 9.11
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 9.68
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.63
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.22
226 TestMountStart/serial/RestartStopped 8.25
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 141.59
231 TestMultiNode/serial/DeployApp2Nodes 5.21
232 TestMultiNode/serial/PingHostFrom2Pods 0.92
233 TestMultiNode/serial/AddNode 60.54
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.69
236 TestMultiNode/serial/CopyFile 10.51
237 TestMultiNode/serial/StopNode 2.29
238 TestMultiNode/serial/StartAfterStop 7.87
239 TestMultiNode/serial/RestartKeepsNodes 74.06
240 TestMultiNode/serial/DeleteNode 5.57
241 TestMultiNode/serial/StopMultiNode 23.82
242 TestMultiNode/serial/RestartMultiNode 52.45
243 TestMultiNode/serial/ValidateNameConflict 34.86
248 TestPreload 124.28
250 TestScheduledStopUnix 110.45
253 TestInsufficientStorage 14.38
254 TestRunningBinaryUpgrade 59.12
256 TestKubernetesUpgrade 363.44
257 TestMissingContainerUpgrade 117.52
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 45.65
261 TestNoKubernetes/serial/StartWithStopK8s 8.01
262 TestNoKubernetes/serial/Start 9.93
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
264 TestNoKubernetes/serial/ProfileList 1.93
265 TestNoKubernetes/serial/Stop 1.31
266 TestNoKubernetes/serial/StartNoArgs 6.96
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
268 TestStoppedBinaryUpgrade/Setup 2.78
269 TestStoppedBinaryUpgrade/Upgrade 60.56
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
279 TestPause/serial/Start 79.22
280 TestPause/serial/SecondStartNoReconfiguration 27.43
289 TestNetworkPlugins/group/false 3.85
294 TestStartStop/group/old-k8s-version/serial/FirstStart 59.18
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.43
297 TestStartStop/group/old-k8s-version/serial/Stop 11.9
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
299 TestStartStop/group/old-k8s-version/serial/SecondStart 47.06
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
305 TestStartStop/group/no-preload/serial/FirstStart 76.93
307 TestStartStop/group/embed-certs/serial/FirstStart 83.46
308 TestStartStop/group/no-preload/serial/DeployApp 9.45
310 TestStartStop/group/no-preload/serial/Stop 12.15
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
312 TestStartStop/group/no-preload/serial/SecondStart 50.39
313 TestStartStop/group/embed-certs/serial/DeployApp 9.33
315 TestStartStop/group/embed-certs/serial/Stop 12.23
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
321 TestStartStop/group/embed-certs/serial/SecondStart 53.53
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.53
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
329 TestStartStop/group/newest-cni/serial/FirstStart 39.76
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.5
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.22
333 TestStartStop/group/newest-cni/serial/DeployApp 0
335 TestStartStop/group/newest-cni/serial/Stop 1.22
336 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
337 TestStartStop/group/newest-cni/serial/SecondStart 22.02
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.03
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
344 TestNetworkPlugins/group/auto/Start 84.7
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
349 TestNetworkPlugins/group/kindnet/Start 85
350 TestNetworkPlugins/group/auto/KubeletFlags 0.36
351 TestNetworkPlugins/group/auto/NetCatPod 12.35
352 TestNetworkPlugins/group/auto/DNS 0.16
353 TestNetworkPlugins/group/auto/Localhost 0.13
354 TestNetworkPlugins/group/auto/HairPin 0.15
355 TestNetworkPlugins/group/calico/Start 73.1
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.82
358 TestNetworkPlugins/group/kindnet/NetCatPod 11.6
359 TestNetworkPlugins/group/kindnet/DNS 0.23
360 TestNetworkPlugins/group/kindnet/Localhost 0.16
361 TestNetworkPlugins/group/kindnet/HairPin 0.22
362 TestNetworkPlugins/group/custom-flannel/Start 63.7
363 TestNetworkPlugins/group/calico/ControllerPod 5.03
364 TestNetworkPlugins/group/calico/KubeletFlags 0.55
365 TestNetworkPlugins/group/calico/NetCatPod 11.32
366 TestNetworkPlugins/group/calico/DNS 0.22
367 TestNetworkPlugins/group/calico/Localhost 0.15
368 TestNetworkPlugins/group/calico/HairPin 0.17
369 TestNetworkPlugins/group/enable-default-cni/Start 77.6
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
372 TestNetworkPlugins/group/custom-flannel/DNS 0.24
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
375 TestNetworkPlugins/group/flannel/Start 63.53
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.31
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/bridge/Start 48.68
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
384 TestNetworkPlugins/group/flannel/NetCatPod 12.39
385 TestNetworkPlugins/group/flannel/DNS 0.22
386 TestNetworkPlugins/group/flannel/Localhost 0.16
387 TestNetworkPlugins/group/flannel/HairPin 0.21
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
389 TestNetworkPlugins/group/bridge/NetCatPod 11.26
390 TestNetworkPlugins/group/bridge/DNS 0.16
391 TestNetworkPlugins/group/bridge/Localhost 0.14
392 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.28.0/json-events (39.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-612821 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-612821 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (39.278492361s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (39.28s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1006 18:42:09.799511    4350 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1006 18:42:09.799591    4350 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-612821
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-612821: exit status 85 (64.892287ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-612821 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-612821 │ jenkins │ v1.37.0 │ 06 Oct 25 18:41 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 18:41:30
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 18:41:30.583787    4356 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:41:30.583897    4356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:41:30.583951    4356 out.go:374] Setting ErrFile to fd 2...
	I1006 18:41:30.583956    4356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:41:30.584230    4356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	W1006 18:41:30.584367    4356 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21701-2540/.minikube/config/config.json: open /home/jenkins/minikube-integration/21701-2540/.minikube/config/config.json: no such file or directory
	I1006 18:41:30.584752    4356 out.go:368] Setting JSON to true
	I1006 18:41:30.585613    4356 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1426,"bootTime":1759774665,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 18:41:30.585680    4356 start.go:140] virtualization:  
	I1006 18:41:30.589933    4356 out.go:99] [download-only-612821] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1006 18:41:30.590111    4356 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball: no such file or directory
	I1006 18:41:30.590219    4356 notify.go:220] Checking for updates...
	I1006 18:41:30.593876    4356 out.go:171] MINIKUBE_LOCATION=21701
	I1006 18:41:30.597115    4356 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 18:41:30.599920    4356 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 18:41:30.603003    4356 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 18:41:30.605907    4356 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1006 18:41:30.611580    4356 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1006 18:41:30.612027    4356 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 18:41:30.658897    4356 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 18:41:30.659038    4356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 18:41:31.084172    4356 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-06 18:41:31.074349582 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 18:41:31.084279    4356 docker.go:318] overlay module found
	I1006 18:41:31.087394    4356 out.go:99] Using the docker driver based on user configuration
	I1006 18:41:31.087444    4356 start.go:304] selected driver: docker
	I1006 18:41:31.087452    4356 start.go:924] validating driver "docker" against <nil>
	I1006 18:41:31.087570    4356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 18:41:31.149868    4356 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-06 18:41:31.139927518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 18:41:31.150029    4356 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 18:41:31.150340    4356 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1006 18:41:31.150519    4356 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 18:41:31.153725    4356 out.go:171] Using Docker driver with root privileges
	I1006 18:41:31.156781    4356 cni.go:84] Creating CNI manager for ""
	I1006 18:41:31.156867    4356 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 18:41:31.156881    4356 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 18:41:31.156966    4356 start.go:348] cluster config:
	{Name:download-only-612821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-612821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 18:41:31.160190    4356 out.go:99] Starting "download-only-612821" primary control-plane node in "download-only-612821" cluster
	I1006 18:41:31.160227    4356 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 18:41:31.163124    4356 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1006 18:41:31.163152    4356 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1006 18:41:31.163304    4356 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 18:41:31.181808    4356 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 18:41:31.181986    4356 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1006 18:41:31.182098    4356 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 18:41:31.228074    4356 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1006 18:41:31.228105    4356 cache.go:58] Caching tarball of preloaded images
	I1006 18:41:31.228252    4356 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1006 18:41:31.231601    4356 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1006 18:41:31.231634    4356 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1006 18:41:31.322411    4356 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1006 18:41:31.322542    4356 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1006 18:41:35.767806    4356 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	
	
	* The control-plane node download-only-612821 host does not exist
	  To start a cluster, run: "minikube start -p download-only-612821"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-612821
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (37.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-652012 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-652012 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (37.082061262s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (37.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1006 18:42:47.297040    4350 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1006 18:42:47.297077    4350 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-652012
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-652012: exit status 85 (83.857282ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-612821 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-612821 │ jenkins │ v1.37.0 │ 06 Oct 25 18:41 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │ 06 Oct 25 18:42 UTC │
	│ delete  │ -p download-only-612821                                                                                                                                                   │ download-only-612821 │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │ 06 Oct 25 18:42 UTC │
	│ start   │ -o=json --download-only -p download-only-652012 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-652012 │ jenkins │ v1.37.0 │ 06 Oct 25 18:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 18:42:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 18:42:10.256003    4554 out.go:360] Setting OutFile to fd 1 ...
	I1006 18:42:10.256108    4554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:42:10.256119    4554 out.go:374] Setting ErrFile to fd 2...
	I1006 18:42:10.256124    4554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 18:42:10.256416    4554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 18:42:10.256797    4554 out.go:368] Setting JSON to true
	I1006 18:42:10.257492    4554 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1466,"bootTime":1759774665,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 18:42:10.257554    4554 start.go:140] virtualization:  
	I1006 18:42:10.259088    4554 out.go:99] [download-only-652012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 18:42:10.259362    4554 notify.go:220] Checking for updates...
	I1006 18:42:10.260652    4554 out.go:171] MINIKUBE_LOCATION=21701
	I1006 18:42:10.261842    4554 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 18:42:10.263219    4554 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 18:42:10.264520    4554 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 18:42:10.265805    4554 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1006 18:42:10.268171    4554 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1006 18:42:10.268408    4554 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 18:42:10.289729    4554 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 18:42:10.289857    4554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 18:42:10.368452    4554 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-06 18:42:10.359827265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 18:42:10.368564    4554 docker.go:318] overlay module found
	I1006 18:42:10.369990    4554 out.go:99] Using the docker driver based on user configuration
	I1006 18:42:10.370023    4554 start.go:304] selected driver: docker
	I1006 18:42:10.370032    4554 start.go:924] validating driver "docker" against <nil>
	I1006 18:42:10.370135    4554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 18:42:10.434801    4554 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-06 18:42:10.425922456 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 18:42:10.434957    4554 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 18:42:10.435241    4554 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1006 18:42:10.435383    4554 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 18:42:10.436704    4554 out.go:171] Using Docker driver with root privileges
	I1006 18:42:10.437865    4554 cni.go:84] Creating CNI manager for ""
	I1006 18:42:10.437935    4554 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 18:42:10.437948    4554 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 18:42:10.438023    4554 start.go:348] cluster config:
	{Name:download-only-652012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-652012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 18:42:10.439489    4554 out.go:99] Starting "download-only-652012" primary control-plane node in "download-only-652012" cluster
	I1006 18:42:10.439514    4554 cache.go:123] Beginning downloading kic base image for docker with crio
	I1006 18:42:10.440725    4554 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1006 18:42:10.440761    4554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 18:42:10.440917    4554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 18:42:10.457198    4554 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 18:42:10.457335    4554 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1006 18:42:10.457366    4554 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1006 18:42:10.457372    4554 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1006 18:42:10.457380    4554 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1006 18:42:10.500740    4554 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1006 18:42:10.500777    4554 cache.go:58] Caching tarball of preloaded images
	I1006 18:42:10.500953    4554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1006 18:42:10.502469    4554 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1006 18:42:10.502500    4554 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1006 18:42:10.587904    4554 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1006 18:42:10.587955    4554 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21701-2540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-652012 host does not exist
	  To start a cluster, run: "minikube start -p download-only-652012"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)
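
The exit status 85 above is expected for a --download-only profile: the kic base image and the preload tarball are cached, but no node container is ever created, so "minikube logs" has nothing to read and the sub-test still passes. A minimal manual check, assuming the download-only-652012 profile from this run is still present:

    out/minikube-linux-arm64 status -p download-only-652012   # reports that the host does not exist
    out/minikube-linux-arm64 start -p download-only-652012    # would actually create the node and make logs available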

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-652012
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1006 18:42:48.469340    4350 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-895506 --alsologtostderr --binary-mirror http://127.0.0.1:38653 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-895506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-895506
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-442328
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-442328: exit status 85 (78.660584ms)

                                                
                                                
-- stdout --
	* Profile "addons-442328" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-442328"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-442328
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-442328: exit status 85 (59.172712ms)

                                                
                                                
-- stdout --
	* Profile "addons-442328" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-442328"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (178.36s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-442328 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-442328 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m58.360222651s)
--- PASS: TestAddons/Setup (178.36s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.24s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-442328 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-442328 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.24s)
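
The namespace check above verifies that the gcp-auth addon replicates its credentials Secret into newly created namespaces. The same check can be repeated by hand against this cluster; the namespace name below is only an example:

    kubectl --context addons-442328 create namespace gcp-auth-demo
    kubectl --context addons-442328 get secret gcp-auth -n gcp-auth-demo   # should exist once the addon has copied it in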

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.83s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-442328 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-442328 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ed0705fd-1d38-4f92-9a24-929f35e2a002] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ed0705fd-1d38-4f92-9a24-929f35e2a002] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.005910818s
addons_test.go:694: (dbg) Run:  kubectl --context addons-442328 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-442328 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-442328 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-442328 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.83s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.53s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-442328
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-442328: (12.022630763s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-442328
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-442328
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-442328
--- PASS: TestAddons/StoppedEnableDisable (12.53s)

                                                
                                    
x
+
TestCertOptions (32.84s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-593131 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-593131 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (30.159984537s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-593131 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-593131 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-593131 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-593131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-593131
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-593131: (1.961977896s)
--- PASS: TestCertOptions (32.84s)
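
TestCertOptions passes extra --apiserver-ips/--apiserver-names values plus a custom --apiserver-port and then reads the generated apiserver certificate on the node. To confirm the requested SANs by hand, the same openssl invocation from the test can be filtered down to the Subject Alternative Name block (a sketch, reusing the profile from this run):

    out/minikube-linux-arm64 -p cert-options-593131 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"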

                                                
                                    
x
+
TestCertExpiration (331.19s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-585086 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1006 19:47:56.447850    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-585086 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (32.498936292s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-585086 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-585086 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (1m55.890963652s)
helpers_test.go:175: Cleaning up "cert-expiration-585086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-585086
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-585086: (2.798352214s)
--- PASS: TestCertExpiration (331.19s)
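
The test starts the cluster with --cert-expiration=3m, waits for that short window to elapse, then restarts with --cert-expiration=8760h so the certificates are regenerated. The new expiry date can be read off the node; a sketch, assuming the profile were still running:

    out/minikube-linux-arm64 -p cert-expiration-585086 ssh \
      "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"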

                                                
                                    
x
+
TestErrorSpam/setup (30.52s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-369239 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-369239 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-369239 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-369239 --driver=docker  --container-runtime=crio: (30.516393014s)
--- PASS: TestErrorSpam/setup (30.52s)

                                                
                                    
x
+
TestErrorSpam/start (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

                                                
                                    
x
+
TestErrorSpam/status (1.06s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 status
--- PASS: TestErrorSpam/status (1.06s)

                                                
                                    
x
+
TestErrorSpam/pause (6.49s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 pause: exit status 80 (2.197550945s)

                                                
                                                
-- stdout --
	* Pausing node nospam-369239 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:49:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 pause: exit status 80 (2.368021735s)

                                                
                                                
-- stdout --
	* Pausing node nospam-369239 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:49:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 pause: exit status 80 (1.924550218s)

                                                
                                                
-- stdout --
	* Pausing node nospam-369239 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:49:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.49s)
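
Every pause attempt above fails the same way: minikube shells into the node and runs "sudo runc list -f json", which exits 1 because /run/runc does not exist on this CRI-O node. The underlying commands can be run directly to reproduce the error outside the test harness (a sketch, using the profile from this run):

    out/minikube-linux-arm64 -p nospam-369239 ssh "sudo runc list -f json"   # fails: open /run/runc: no such file or directory
    out/minikube-linux-arm64 -p nospam-369239 ssh "sudo crictl ps"           # the containers are still visible through CRI-O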

                                                
                                    
x
+
TestErrorSpam/unpause (6.36s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 unpause: exit status 80 (2.162670128s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-369239 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:49:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 unpause: exit status 80 (2.165200983s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-369239 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:49:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 unpause: exit status 80 (2.030115847s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-369239 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-06T18:49:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.36s)

                                                
                                    
x
+
TestErrorSpam/stop (1.42s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 stop: (1.218450849s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-369239 --log_dir /tmp/nospam-369239 stop
--- PASS: TestErrorSpam/stop (1.42s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21701-2540/.minikube/files/etc/test/nested/copy/4350/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (80.65s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-184058 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1006 18:50:48.455039    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 18:50:48.462274    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 18:50:48.474053    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 18:50:48.495578    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 18:50:48.537057    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 18:50:48.618631    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 18:50:48.780150    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 18:50:49.101870    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 18:50:49.744172    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 18:50:51.025856    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 18:50:53.588175    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 18:50:58.709595    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 18:51:08.951802    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-184058 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.648690676s)
--- PASS: TestFunctional/serial/StartWithProxy (80.65s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (29.17s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1006 18:51:18.724940    4350 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-184058 --alsologtostderr -v=8
E1006 18:51:29.433161    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-184058 --alsologtostderr -v=8: (29.165027101s)
functional_test.go:678: soft start took 29.166862127s for "functional-184058" cluster.
I1006 18:51:47.890351    4350 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.17s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-184058 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.4s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-184058 cache add registry.k8s.io/pause:3.1: (1.146266841s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-184058 cache add registry.k8s.io/pause:3.3: (1.166332985s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-184058 cache add registry.k8s.io/pause:latest: (1.084141169s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.40s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-184058 /tmp/TestFunctionalserialCacheCmdcacheadd_local4076655606/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 cache add minikube-local-cache-test:functional-184058
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 cache delete minikube-local-cache-test:functional-184058
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-184058
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-184058 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (282.485952ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)
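
The cache_reload sequence above is the recovery path this test exercises: remove a cached image from the node with crictl, observe that inspecti no longer finds it, then let "cache reload" push everything in minikube's local cache back onto the node. Condensed, the same flow is:

    out/minikube-linux-arm64 -p functional-184058 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-184058 cache reload
    out/minikube-linux-arm64 -p functional-184058 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # found again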

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 kubectl -- --context functional-184058 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-184058 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (51.91s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-184058 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1006 18:52:10.394991    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-184058 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (51.91128306s)
functional_test.go:776: restart took 51.911369978s for "functional-184058" cluster.
I1006 18:52:47.101808    4350 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (51.91s)
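
The --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision option is turned into a kube-apiserver argument when the cluster restarts. Whether it actually landed can be checked from the static pod spec; a sketch against this cluster:

    kubectl --context functional-184058 -n kube-system get pod -l component=kube-apiserver -o yaml \
      | grep enable-admission-plugins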

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-184058 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-184058 logs: (1.45361262s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 logs --file /tmp/TestFunctionalserialLogsFileCmd1356377619/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-184058 logs --file /tmp/TestFunctionalserialLogsFileCmd1356377619/001/logs.txt: (1.489780044s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                    
TestFunctional/serial/InvalidService (4.4s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-184058 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-184058
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-184058: exit status 115 (373.992222ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30852 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-184058 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.40s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-184058 config get cpus: exit status 14 (59.418282ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-184058 config get cpus: exit status 14 (68.261756ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
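
Note: the exit status 14 above is expected; `config get` fails for a key that has not been set. A minimal sketch of the same sequence (profile name from the run above):

  out/minikube-linux-arm64 -p functional-184058 config get cpus     # exit 14: key not found
  out/minikube-linux-arm64 -p functional-184058 config set cpus 2
  out/minikube-linux-arm64 -p functional-184058 config get cpus     # prints 2
  out/minikube-linux-arm64 -p functional-184058 config unset cpus
  out/minikube-linux-arm64 -p functional-184058 config get cpus     # exit 14 again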

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-184058 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-184058 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 31435: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.27s)

                                                
                                    
TestFunctional/parallel/DryRun (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-184058 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-184058 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (241.73326ms)

                                                
                                                
-- stdout --
	* [functional-184058] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:03:23.836739   30623 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:03:23.837131   30623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:03:23.837169   30623 out.go:374] Setting ErrFile to fd 2...
	I1006 19:03:23.837193   30623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:03:23.837495   30623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:03:23.838021   30623 out.go:368] Setting JSON to false
	I1006 19:03:23.839885   30623 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2739,"bootTime":1759774665,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:03:23.840008   30623 start.go:140] virtualization:  
	I1006 19:03:23.843392   30623 out.go:179] * [functional-184058] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:03:23.848133   30623 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:03:23.848917   30623 notify.go:220] Checking for updates...
	I1006 19:03:23.860182   30623 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:03:23.863392   30623 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:03:23.868031   30623 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:03:23.871091   30623 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:03:23.874032   30623 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:03:23.877645   30623 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:03:23.878201   30623 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:03:23.913161   30623 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:03:23.913347   30623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:03:23.977308   30623 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:03:23.967684472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:03:23.977416   30623 docker.go:318] overlay module found
	I1006 19:03:23.980520   30623 out.go:179] * Using the docker driver based on existing profile
	I1006 19:03:23.983580   30623 start.go:304] selected driver: docker
	I1006 19:03:23.983605   30623 start.go:924] validating driver "docker" against &{Name:functional-184058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-184058 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:03:23.983820   30623 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:03:23.987544   30623 out.go:203] 
	W1006 19:03:23.990522   30623 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1006 19:03:23.993534   30623 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-184058 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.62s)
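
Note: the non-zero exit is the point of this test. With `--dry-run`, minikube still validates the requested resources, and 250MB is below the usable minimum of 1800MB reported above (RSRC_INSUFFICIENT_REQ_MEMORY, exit status 23). Both dry-run cases, in sketch form (commands as run above):

  # under-provisioned: validation fails, nothing is started (exit 23)
  out/minikube-linux-arm64 start -p functional-184058 --dry-run --memory 250MB --driver=docker --container-runtime=crio
  # valid configuration: the dry run succeeds without creating anything
  out/minikube-linux-arm64 start -p functional-184058 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio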

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-184058 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-184058 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (274.403244ms)

                                                
                                                
-- stdout --
	* [functional-184058] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:03:23.559847   30524 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:03:23.560187   30524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:03:23.560213   30524 out.go:374] Setting ErrFile to fd 2...
	I1006 19:03:23.560219   30524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:03:23.561953   30524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:03:23.562406   30524 out.go:368] Setting JSON to false
	I1006 19:03:23.563203   30524 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2739,"bootTime":1759774665,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:03:23.563266   30524 start.go:140] virtualization:  
	I1006 19:03:23.567389   30524 out.go:179] * [functional-184058] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1006 19:03:23.571793   30524 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:03:23.571925   30524 notify.go:220] Checking for updates...
	I1006 19:03:23.578535   30524 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:03:23.581716   30524 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:03:23.584979   30524 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:03:23.588061   30524 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:03:23.590956   30524 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:03:23.594472   30524 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:03:23.595013   30524 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:03:23.644190   30524 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:03:23.644320   30524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:03:23.735210   30524 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:03:23.725250957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:03:23.735322   30524 docker.go:318] overlay module found
	I1006 19:03:23.738534   30524 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1006 19:03:23.741559   30524 start.go:304] selected driver: docker
	I1006 19:03:23.741578   30524 start.go:924] validating driver "docker" against &{Name:functional-184058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-184058 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 19:03:23.741694   30524 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:03:23.745357   30524 out.go:203] 
	W1006 19:03:23.748347   30524 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1006 19:03:23.751283   30524 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)

TestFunctional/parallel/StatusCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.24s)
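
Note: the status checks above use the default output, a Go template, and JSON. A sketch of the three forms (the label text in the template is arbitrary; the fields are the ones shown in the run above):

  out/minikube-linux-arm64 -p functional-184058 status
  out/minikube-linux-arm64 -p functional-184058 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  out/minikube-linux-arm64 -p functional-184058 status -o json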

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [ecf3f65b-ecc0-41fe-911f-ecc7eba033b7] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003529667s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-184058 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-184058 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-184058 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-184058 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7a135377-1e2c-4179-a84b-a564dd8c3181] Pending
helpers_test.go:352: "sp-pod" [7a135377-1e2c-4179-a84b-a564dd8c3181] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7a135377-1e2c-4179-a84b-a564dd8c3181] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003603682s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-184058 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-184058 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-184058 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6a0dec32-b34e-4efb-85a3-6e44a0cee586] Pending
helpers_test.go:352: "sp-pod" [6a0dec32-b34e-4efb-85a3-6e44a0cee586] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00397728s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-184058 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.80s)
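
Note: the test verifies that data written to the claim survives deletion of the consuming pod. Roughly (manifests are the testdata files referenced above; pod name from the run):

  kubectl --context functional-184058 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-184058 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-184058 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-184058 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-184058 apply -f testdata/storage-provisioner/pod.yaml
  # the file created by the first pod should still be visible from the new pod
  kubectl --context functional-184058 exec sp-pod -- ls /tmp/mount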

                                                
                                    
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh -n functional-184058 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 cp functional-184058:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2116690359/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh -n functional-184058 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh -n functional-184058 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.56s)
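
Note: `minikube cp` copies in both directions; the steps above amount to the following sketch (/tmp/cp-test.txt stands in for the temp path used by the test):

  # host -> node
  out/minikube-linux-arm64 -p functional-184058 cp testdata/cp-test.txt /home/docker/cp-test.txt
  # node -> host
  out/minikube-linux-arm64 -p functional-184058 cp functional-184058:/home/docker/cp-test.txt /tmp/cp-test.txt
  # verify the file on the node
  out/minikube-linux-arm64 -p functional-184058 ssh -n functional-184058 "sudo cat /home/docker/cp-test.txt"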

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4350/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "sudo cat /etc/test/nested/copy/4350/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
TestFunctional/parallel/CertSync (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4350.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "sudo cat /etc/ssl/certs/4350.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4350.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "sudo cat /usr/share/ca-certificates/4350.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/43502.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "sudo cat /etc/ssl/certs/43502.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/43502.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "sudo cat /usr/share/ca-certificates/43502.pem"
2025/10/06 19:03:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.71s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-184058 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-184058 ssh "sudo systemctl is-active docker": exit status 1 (366.42263ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-184058 ssh "sudo systemctl is-active containerd": exit status 1 (384.933157ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)
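
Note: exit status 1 (ssh exit 3) is the expected outcome here; with crio as the active runtime, the other runtimes must report `inactive`. The check, in sketch form:

  # both should print "inactive" and exit non-zero on a crio cluster
  out/minikube-linux-arm64 -p functional-184058 ssh "sudo systemctl is-active docker"
  out/minikube-linux-arm64 -p functional-184058 ssh "sudo systemctl is-active containerd"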

                                                
                                    
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-184058 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-184058 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-184058 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 26821: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-184058 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-184058 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-184058 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [6f8e7052-37a0-4f07-b239-67b6a2142aee] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [6f8e7052-37a0-4f07-b239-67b6a2142aee] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003078861s
I1006 18:53:05.903109    4350 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-184058 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.118.186 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
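
Note: the tunnel flow being exercised is roughly the following (service manifest is the testdata file referenced above; the 10.104.118.186 address is whatever LoadBalancer ingress IP the tunnel assigns, and the curl stands in for the test's own HTTP check):

  # keep a tunnel running so LoadBalancer services get an ingress IP
  out/minikube-linux-arm64 -p functional-184058 tunnel --alsologtostderr &
  kubectl --context functional-184058 apply -f testdata/testsvc.yaml
  # read the assigned ingress IP and hit it directly from the host
  IP=$(kubectl --context functional-184058 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl -s "http://$IP"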

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-184058 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "347.169848ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "54.270175ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "361.171858ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "56.814794ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-184058 /tmp/TestFunctionalparallelMountCmdany-port1126735794/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759777391055286965" to /tmp/TestFunctionalparallelMountCmdany-port1126735794/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759777391055286965" to /tmp/TestFunctionalparallelMountCmdany-port1126735794/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759777391055286965" to /tmp/TestFunctionalparallelMountCmdany-port1126735794/001/test-1759777391055286965
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-184058 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (343.318304ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1006 19:03:11.399717    4350 retry.go:31] will retry after 388.374272ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  6 19:03 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  6 19:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  6 19:03 test-1759777391055286965
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh cat /mount-9p/test-1759777391055286965
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-184058 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [7e84b298-3467-4e3d-b6e2-84a8f136e65c] Pending
helpers_test.go:352: "busybox-mount" [7e84b298-3467-4e3d-b6e2-84a8f136e65c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [7e84b298-3467-4e3d-b6e2-84a8f136e65c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [7e84b298-3467-4e3d-b6e2-84a8f136e65c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00312469s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-184058 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-184058 /tmp/TestFunctionalparallelMountCmdany-port1126735794/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.74s)
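
Note: the 9p mount workflow above, in sketch form (/tmp/somedir is a placeholder for the temp host directory created by the test; the initial findmnt failure in the log is just the test polling before the mount is ready):

  # expose a host directory inside the node over 9p
  out/minikube-linux-arm64 mount -p functional-184058 /tmp/somedir:/mount-9p &
  # confirm it is mounted and inspect its contents from the guest
  out/minikube-linux-arm64 -p functional-184058 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-184058 ssh -- ls -la /mount-9p
  # unmount when done
  out/minikube-linux-arm64 -p functional-184058 ssh "sudo umount -f /mount-9p"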

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-184058 /tmp/TestFunctionalparallelMountCmdspecific-port2243156921/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-184058 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (362.789916ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1006 19:03:18.160661    4350 retry.go:31] will retry after 666.342528ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-184058 /tmp/TestFunctionalparallelMountCmdspecific-port2243156921/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-184058 ssh "sudo umount -f /mount-9p": exit status 1 (253.551081ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-184058 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-184058 /tmp/TestFunctionalparallelMountCmdspecific-port2243156921/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.05s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-184058 /tmp/TestFunctionalparallelMountCmdVerifyCleanup664967455/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-184058 /tmp/TestFunctionalparallelMountCmdVerifyCleanup664967455/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-184058 /tmp/TestFunctionalparallelMountCmdVerifyCleanup664967455/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-184058 ssh "findmnt -T" /mount1: exit status 1 (610.754932ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1006 19:03:20.458767    4350 retry.go:31] will retry after 603.046383ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-184058 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-184058 /tmp/TestFunctionalparallelMountCmdVerifyCleanup664967455/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-184058 /tmp/TestFunctionalparallelMountCmdVerifyCleanup664967455/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-184058 /tmp/TestFunctionalparallelMountCmdVerifyCleanup664967455/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.09s)
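
Note: with several concurrent mounts, cleanup is done with the `--kill` flag used above:

  # terminate all mount processes for the profile in one go
  out/minikube-linux-arm64 mount -p functional-184058 --kill=true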

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 service list -o json
functional_test.go:1504: Took "611.110518ms" to run "out/minikube-linux-arm64 -p functional-184058 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-184058 version -o=json --components: (1.329364708s)
--- PASS: TestFunctional/parallel/Version/components (1.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-184058 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-184058 image ls --format short --alsologtostderr:
I1006 19:03:38.814066   32901 out.go:360] Setting OutFile to fd 1 ...
I1006 19:03:38.814210   32901 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 19:03:38.814231   32901 out.go:374] Setting ErrFile to fd 2...
I1006 19:03:38.814235   32901 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 19:03:38.814502   32901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
I1006 19:03:38.815110   32901 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 19:03:38.815229   32901 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 19:03:38.815677   32901 cli_runner.go:164] Run: docker container inspect functional-184058 --format={{.State.Status}}
I1006 19:03:38.842254   32901 ssh_runner.go:195] Run: systemctl --version
I1006 19:03:38.842313   32901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
I1006 19:03:38.859965   32901 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/functional-184058/id_rsa Username:docker}
I1006 19:03:38.962916   32901 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-184058 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ latest             │ 0777d15d89ece │ 202MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/library/nginx                 │ alpine             │ 35f3cbee4fb77 │ 54.3MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-184058 image ls --format table --alsologtostderr:
I1006 19:03:40.238730   33211 out.go:360] Setting OutFile to fd 1 ...
I1006 19:03:40.238964   33211 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 19:03:40.238987   33211 out.go:374] Setting ErrFile to fd 2...
I1006 19:03:40.239030   33211 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 19:03:40.239398   33211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
I1006 19:03:40.240208   33211 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 19:03:40.240385   33211 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 19:03:40.240857   33211 cli_runner.go:164] Run: docker container inspect functional-184058 --format={{.State.Status}}
I1006 19:03:40.258906   33211 ssh_runner.go:195] Run: systemctl --version
I1006 19:03:40.258954   33211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
I1006 19:03:40.277428   33211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/functional-184058/id_rsa Username:docker}
I1006 19:03:40.374082   33211 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-184058 image ls --format json --alsologtostderr:
[{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["regist
ry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54348302"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"3d18732f8686cc3c878
055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io
/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"0777d15d89ecedd8739877d62d8983e9f4b081efa23140db06299b0abe7a985b","repoDigests":["docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc","docker.io/library/nginx@sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992"],"repoTags":["docker.io/library/nginx:latest"],"size":"202036629"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc
7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a
45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-184058 image ls --format json --alsologtostderr:
I1006 19:03:39.981069   33173 out.go:360] Setting OutFile to fd 1 ...
I1006 19:03:39.981303   33173 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 19:03:39.981330   33173 out.go:374] Setting ErrFile to fd 2...
I1006 19:03:39.981349   33173 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 19:03:39.981603   33173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
I1006 19:03:39.982238   33173 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 19:03:39.982405   33173 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 19:03:39.982895   33173 cli_runner.go:164] Run: docker container inspect functional-184058 --format={{.State.Status}}
I1006 19:03:40.002188   33173 ssh_runner.go:195] Run: systemctl --version
I1006 19:03:40.002237   33173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
I1006 19:03:40.051819   33173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/functional-184058/id_rsa Username:docker}
I1006 19:03:40.150679   33173 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-184058 image ls --format yaml --alsologtostderr:
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 0777d15d89ecedd8739877d62d8983e9f4b081efa23140db06299b0abe7a985b
repoDigests:
- docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc
- docker.io/library/nginx@sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992
repoTags:
- docker.io/library/nginx:latest
size: "202036629"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac
repoTags:
- docker.io/library/nginx:alpine
size: "54348302"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-184058 image ls --format yaml --alsologtostderr:
I1006 19:03:39.056757   32964 out.go:360] Setting OutFile to fd 1 ...
I1006 19:03:39.056930   32964 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 19:03:39.056940   32964 out.go:374] Setting ErrFile to fd 2...
I1006 19:03:39.056944   32964 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 19:03:39.057417   32964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
I1006 19:03:39.058374   32964 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 19:03:39.058520   32964 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 19:03:39.059177   32964 cli_runner.go:164] Run: docker container inspect functional-184058 --format={{.State.Status}}
I1006 19:03:39.080210   32964 ssh_runner.go:195] Run: systemctl --version
I1006 19:03:39.080281   32964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
I1006 19:03:39.101969   32964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/functional-184058/id_rsa Username:docker}
I1006 19:03:39.208158   32964 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-184058 ssh pgrep buildkitd: exit status 1 (303.901114ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image build -t localhost/my-image:functional-184058 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-184058 image build -t localhost/my-image:functional-184058 testdata/build --alsologtostderr: (3.544103068s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-184058 image build -t localhost/my-image:functional-184058 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 05c20341499
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-184058
--> 2d3a6ec6150
Successfully tagged localhost/my-image:functional-184058
2d3a6ec615059742c915b1b704bf4c786af2549041ef29bc63dc56b7747a0e74
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-184058 image build -t localhost/my-image:functional-184058 testdata/build --alsologtostderr:
I1006 19:03:39.613244   33081 out.go:360] Setting OutFile to fd 1 ...
I1006 19:03:39.613412   33081 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 19:03:39.613436   33081 out.go:374] Setting ErrFile to fd 2...
I1006 19:03:39.613458   33081 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 19:03:39.613845   33081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
I1006 19:03:39.614988   33081 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 19:03:39.616203   33081 config.go:182] Loaded profile config "functional-184058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1006 19:03:39.617162   33081 cli_runner.go:164] Run: docker container inspect functional-184058 --format={{.State.Status}}
I1006 19:03:39.639045   33081 ssh_runner.go:195] Run: systemctl --version
I1006 19:03:39.639105   33081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-184058
I1006 19:03:39.660876   33081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/functional-184058/id_rsa Username:docker}
I1006 19:03:39.758175   33081 build_images.go:161] Building image from path: /tmp/build.1897857702.tar
I1006 19:03:39.758251   33081 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1006 19:03:39.767893   33081 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1897857702.tar
I1006 19:03:39.772695   33081 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1897857702.tar: stat -c "%s %y" /var/lib/minikube/build/build.1897857702.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1897857702.tar': No such file or directory
I1006 19:03:39.772770   33081 ssh_runner.go:362] scp /tmp/build.1897857702.tar --> /var/lib/minikube/build/build.1897857702.tar (3072 bytes)
I1006 19:03:39.796802   33081 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1897857702
I1006 19:03:39.807101   33081 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1897857702 -xf /var/lib/minikube/build/build.1897857702.tar
I1006 19:03:39.816729   33081 crio.go:315] Building image: /var/lib/minikube/build/build.1897857702
I1006 19:03:39.816800   33081 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-184058 /var/lib/minikube/build/build.1897857702 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1006 19:03:43.070014   33081 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-184058 /var/lib/minikube/build/build.1897857702 --cgroup-manager=cgroupfs: (3.253192427s)
I1006 19:03:43.070085   33081 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1897857702
I1006 19:03:43.078512   33081 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1897857702.tar
I1006 19:03:43.086274   33081 build_images.go:217] Built localhost/my-image:functional-184058 from /tmp/build.1897857702.tar
I1006 19:03:43.086302   33081 build_images.go:133] succeeded building to: functional-184058
I1006 19:03:43.086307   33081 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-184058
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image rm kicbase/echo-server:functional-184058 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-184058 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-184058
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-184058
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-184058
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (193.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1006 19:05:48.449680    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m12.27758689s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (193.17s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 kubectl -- rollout status deployment/busybox: (5.138547623s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-2lkkz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-67tpj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-vkmgl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-2lkkz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-67tpj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-vkmgl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-2lkkz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-67tpj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-vkmgl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.08s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-2lkkz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-2lkkz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-67tpj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-67tpj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-vkmgl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 kubectl -- exec busybox-7b57f96db7-vkmgl -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.47s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 node add --alsologtostderr -v 5
E1006 19:07:11.520492    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:07:56.447206    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:07:56.453825    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:07:56.465126    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:07:56.486482    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:07:56.527849    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:07:56.609219    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:07:56.770715    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:07:57.092390    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:07:57.734197    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:07:59.015874    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:08:01.577416    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:08:06.699245    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 node add --alsologtostderr -v 5: (58.227705053s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 status --alsologtostderr -v 5: (1.050885558s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.28s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-626099 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.026397279s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 status --output json --alsologtostderr -v 5: (1.018991012s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp testdata/cp-test.txt ha-626099:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1291059823/001/cp-test_ha-626099.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099:/home/docker/cp-test.txt ha-626099-m02:/home/docker/cp-test_ha-626099_ha-626099-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m02 "sudo cat /home/docker/cp-test_ha-626099_ha-626099-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099:/home/docker/cp-test.txt ha-626099-m03:/home/docker/cp-test_ha-626099_ha-626099-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m03 "sudo cat /home/docker/cp-test_ha-626099_ha-626099-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099:/home/docker/cp-test.txt ha-626099-m04:/home/docker/cp-test_ha-626099_ha-626099-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m04 "sudo cat /home/docker/cp-test_ha-626099_ha-626099-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp testdata/cp-test.txt ha-626099-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1291059823/001/cp-test_ha-626099-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099-m02:/home/docker/cp-test.txt ha-626099:/home/docker/cp-test_ha-626099-m02_ha-626099.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099 "sudo cat /home/docker/cp-test_ha-626099-m02_ha-626099.txt"
E1006 19:08:16.941487    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099-m02:/home/docker/cp-test.txt ha-626099-m03:/home/docker/cp-test_ha-626099-m02_ha-626099-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m03 "sudo cat /home/docker/cp-test_ha-626099-m02_ha-626099-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099-m02:/home/docker/cp-test.txt ha-626099-m04:/home/docker/cp-test_ha-626099-m02_ha-626099-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m04 "sudo cat /home/docker/cp-test_ha-626099-m02_ha-626099-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp testdata/cp-test.txt ha-626099-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1291059823/001/cp-test_ha-626099-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099-m03:/home/docker/cp-test.txt ha-626099:/home/docker/cp-test_ha-626099-m03_ha-626099.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099 "sudo cat /home/docker/cp-test_ha-626099-m03_ha-626099.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099-m03:/home/docker/cp-test.txt ha-626099-m02:/home/docker/cp-test_ha-626099-m03_ha-626099-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m02 "sudo cat /home/docker/cp-test_ha-626099-m03_ha-626099-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099-m03:/home/docker/cp-test.txt ha-626099-m04:/home/docker/cp-test_ha-626099-m03_ha-626099-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m04 "sudo cat /home/docker/cp-test_ha-626099-m03_ha-626099-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp testdata/cp-test.txt ha-626099-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1291059823/001/cp-test_ha-626099-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099-m04:/home/docker/cp-test.txt ha-626099:/home/docker/cp-test_ha-626099-m04_ha-626099.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099 "sudo cat /home/docker/cp-test_ha-626099-m04_ha-626099.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099-m04:/home/docker/cp-test.txt ha-626099-m02:/home/docker/cp-test_ha-626099-m04_ha-626099-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m02 "sudo cat /home/docker/cp-test_ha-626099-m04_ha-626099-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 cp ha-626099-m04:/home/docker/cp-test.txt ha-626099-m03:/home/docker/cp-test_ha-626099-m04_ha-626099-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 ssh -n ha-626099-m03 "sudo cat /home/docker/cp-test_ha-626099-m04_ha-626099-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.15s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 node stop m02 --alsologtostderr -v 5
E1006 19:08:37.423669    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 node stop m02 --alsologtostderr -v 5: (12.008491284s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-626099 status --alsologtostderr -v 5: exit status 7 (759.834897ms)

                                                
                                                
-- stdout --
	ha-626099
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-626099-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-626099-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-626099-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:08:40.573626   48117 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:08:40.573853   48117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:08:40.573883   48117 out.go:374] Setting ErrFile to fd 2...
	I1006 19:08:40.573918   48117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:08:40.574247   48117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:08:40.574460   48117 out.go:368] Setting JSON to false
	I1006 19:08:40.574532   48117 mustload.go:65] Loading cluster: ha-626099
	I1006 19:08:40.574609   48117 notify.go:220] Checking for updates...
	I1006 19:08:40.575782   48117 config.go:182] Loaded profile config "ha-626099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:08:40.575835   48117 status.go:174] checking status of ha-626099 ...
	I1006 19:08:40.576564   48117 cli_runner.go:164] Run: docker container inspect ha-626099 --format={{.State.Status}}
	I1006 19:08:40.595414   48117 status.go:371] ha-626099 host status = "Running" (err=<nil>)
	I1006 19:08:40.595439   48117 host.go:66] Checking if "ha-626099" exists ...
	I1006 19:08:40.595920   48117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-626099
	I1006 19:08:40.617635   48117 host.go:66] Checking if "ha-626099" exists ...
	I1006 19:08:40.617917   48117 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:08:40.617968   48117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-626099
	I1006 19:08:40.642487   48117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/ha-626099/id_rsa Username:docker}
	I1006 19:08:40.738187   48117 ssh_runner.go:195] Run: systemctl --version
	I1006 19:08:40.745004   48117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:08:40.757860   48117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:08:40.841899   48117 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-06 19:08:40.831367298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:08:40.842438   48117 kubeconfig.go:125] found "ha-626099" server: "https://192.168.49.254:8443"
	I1006 19:08:40.842477   48117 api_server.go:166] Checking apiserver status ...
	I1006 19:08:40.842532   48117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:08:40.854787   48117 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1256/cgroup
	I1006 19:08:40.864329   48117 api_server.go:182] apiserver freezer: "6:freezer:/docker/151291922d50a58c3b0c086e0376ec1a61b05c98a4c4902116df32fa87f57bec/crio/crio-770b3e146dbc8379f6144b1f02d5ee15c8f3bf1cc1ea5bd94c62b544f04d69ed"
	I1006 19:08:40.864401   48117 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/151291922d50a58c3b0c086e0376ec1a61b05c98a4c4902116df32fa87f57bec/crio/crio-770b3e146dbc8379f6144b1f02d5ee15c8f3bf1cc1ea5bd94c62b544f04d69ed/freezer.state
	I1006 19:08:40.872267   48117 api_server.go:204] freezer state: "THAWED"
	I1006 19:08:40.872316   48117 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1006 19:08:40.880921   48117 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1006 19:08:40.881001   48117 status.go:463] ha-626099 apiserver status = Running (err=<nil>)
	I1006 19:08:40.881017   48117 status.go:176] ha-626099 status: &{Name:ha-626099 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 19:08:40.881035   48117 status.go:174] checking status of ha-626099-m02 ...
	I1006 19:08:40.881364   48117 cli_runner.go:164] Run: docker container inspect ha-626099-m02 --format={{.State.Status}}
	I1006 19:08:40.900229   48117 status.go:371] ha-626099-m02 host status = "Stopped" (err=<nil>)
	I1006 19:08:40.900252   48117 status.go:384] host is not running, skipping remaining checks
	I1006 19:08:40.900259   48117 status.go:176] ha-626099-m02 status: &{Name:ha-626099-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 19:08:40.900279   48117 status.go:174] checking status of ha-626099-m03 ...
	I1006 19:08:40.900586   48117 cli_runner.go:164] Run: docker container inspect ha-626099-m03 --format={{.State.Status}}
	I1006 19:08:40.919222   48117 status.go:371] ha-626099-m03 host status = "Running" (err=<nil>)
	I1006 19:08:40.919244   48117 host.go:66] Checking if "ha-626099-m03" exists ...
	I1006 19:08:40.919553   48117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-626099-m03
	I1006 19:08:40.937298   48117 host.go:66] Checking if "ha-626099-m03" exists ...
	I1006 19:08:40.937602   48117 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:08:40.937647   48117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-626099-m03
	I1006 19:08:40.956425   48117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/ha-626099-m03/id_rsa Username:docker}
	I1006 19:08:41.057808   48117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:08:41.072226   48117 kubeconfig.go:125] found "ha-626099" server: "https://192.168.49.254:8443"
	I1006 19:08:41.072257   48117 api_server.go:166] Checking apiserver status ...
	I1006 19:08:41.072351   48117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:08:41.083734   48117 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	I1006 19:08:41.092257   48117 api_server.go:182] apiserver freezer: "6:freezer:/docker/fded9a35ab85da63d71cb92328d98f335d1666f39c8cbba64cc77ce72b64f242/crio/crio-2bd7e860c11d28e1255f157c3ab0e289d199c6da318d5c54c6d80eae2f3a9dd2"
	I1006 19:08:41.092361   48117 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fded9a35ab85da63d71cb92328d98f335d1666f39c8cbba64cc77ce72b64f242/crio/crio-2bd7e860c11d28e1255f157c3ab0e289d199c6da318d5c54c6d80eae2f3a9dd2/freezer.state
	I1006 19:08:41.099764   48117 api_server.go:204] freezer state: "THAWED"
	I1006 19:08:41.099790   48117 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1006 19:08:41.109638   48117 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1006 19:08:41.109668   48117 status.go:463] ha-626099-m03 apiserver status = Running (err=<nil>)
	I1006 19:08:41.109677   48117 status.go:176] ha-626099-m03 status: &{Name:ha-626099-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 19:08:41.109694   48117 status.go:174] checking status of ha-626099-m04 ...
	I1006 19:08:41.110029   48117 cli_runner.go:164] Run: docker container inspect ha-626099-m04 --format={{.State.Status}}
	I1006 19:08:41.127175   48117 status.go:371] ha-626099-m04 host status = "Running" (err=<nil>)
	I1006 19:08:41.127194   48117 host.go:66] Checking if "ha-626099-m04" exists ...
	I1006 19:08:41.127490   48117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-626099-m04
	I1006 19:08:41.145186   48117 host.go:66] Checking if "ha-626099-m04" exists ...
	I1006 19:08:41.145548   48117 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:08:41.145603   48117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-626099-m04
	I1006 19:08:41.163694   48117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/ha-626099-m04/id_rsa Username:docker}
	I1006 19:08:41.257093   48117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:08:41.272203   48117 status.go:176] ha-626099-m04 status: &{Name:ha-626099-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.77s)
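
For reference, a minimal Go sketch (not part of the test suite; it assumes only the standard os/exec package and the binary path and profile name used in the run above) of how the non-zero "status" exit logged here can be detected programmatically:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the test runs above; "minikube status" exits non-zero
	// (7 in the run logged here) when any node in the profile is not Running.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-626099", "status", "--alsologtostderr", "-v", "5")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}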

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (29.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 node start m02 --alsologtostderr -v 5: (28.069876695s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 status --alsologtostderr -v 5: (1.324091673s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (29.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.00s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (156.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 stop --alsologtostderr -v 5
E1006 19:09:18.385014    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 stop --alsologtostderr -v 5: (27.166628364s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 start --wait true --alsologtostderr -v 5
E1006 19:10:40.306812    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:10:48.448588    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 start --wait true --alsologtostderr -v 5: (2m9.600931621s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (156.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (12.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 node delete m03 --alsologtostderr -v 5: (11.048027935s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.03s)
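
The kubectl go-template above prints the status of each node's "Ready" condition. A minimal client-go sketch of the same check (illustrative only; it assumes a kubeconfig at the default location rather than the test's --context handling):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config) and build a clientset.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List nodes and print the "Ready" condition for each one,
	// mirroring the go-template used by the test above.
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println(n.Name, c.Status)
			}
		}
	}
}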

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 stop --alsologtostderr -v 5: (35.407481874s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-626099 status --alsologtostderr -v 5: exit status 7 (108.61197ms)

                                                
                                                
-- stdout --
	ha-626099
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-626099-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-626099-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:12:37.836776   60048 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:12:37.836908   60048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:12:37.836918   60048 out.go:374] Setting ErrFile to fd 2...
	I1006 19:12:37.836923   60048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:12:37.837164   60048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:12:37.837352   60048 out.go:368] Setting JSON to false
	I1006 19:12:37.837385   60048 mustload.go:65] Loading cluster: ha-626099
	I1006 19:12:37.837425   60048 notify.go:220] Checking for updates...
	I1006 19:12:37.837782   60048 config.go:182] Loaded profile config "ha-626099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:12:37.837800   60048 status.go:174] checking status of ha-626099 ...
	I1006 19:12:37.838303   60048 cli_runner.go:164] Run: docker container inspect ha-626099 --format={{.State.Status}}
	I1006 19:12:37.857583   60048 status.go:371] ha-626099 host status = "Stopped" (err=<nil>)
	I1006 19:12:37.857606   60048 status.go:384] host is not running, skipping remaining checks
	I1006 19:12:37.857612   60048 status.go:176] ha-626099 status: &{Name:ha-626099 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 19:12:37.857642   60048 status.go:174] checking status of ha-626099-m02 ...
	I1006 19:12:37.857938   60048 cli_runner.go:164] Run: docker container inspect ha-626099-m02 --format={{.State.Status}}
	I1006 19:12:37.877062   60048 status.go:371] ha-626099-m02 host status = "Stopped" (err=<nil>)
	I1006 19:12:37.877087   60048 status.go:384] host is not running, skipping remaining checks
	I1006 19:12:37.877094   60048 status.go:176] ha-626099-m02 status: &{Name:ha-626099-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 19:12:37.877129   60048 status.go:174] checking status of ha-626099-m04 ...
	I1006 19:12:37.877414   60048 cli_runner.go:164] Run: docker container inspect ha-626099-m04 --format={{.State.Status}}
	I1006 19:12:37.898337   60048 status.go:371] ha-626099-m04 host status = "Stopped" (err=<nil>)
	I1006 19:12:37.898357   60048 status.go:384] host is not running, skipping remaining checks
	I1006 19:12:37.898364   60048 status.go:176] ha-626099-m04 status: &{Name:ha-626099-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (89.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1006 19:12:56.447574    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:13:24.148119    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m28.592922243s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (89.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (79.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 node add --control-plane --alsologtostderr -v 5: (1m18.580331901s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-626099 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-626099 status --alsologtostderr -v 5: (1.006972999s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.59s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.053543702s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

                                                
                                    
x
+
TestJSONOutput/start/Command (82.24s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-692334 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1006 19:15:48.449434    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-692334 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m22.234916614s)
--- PASS: TestJSONOutput/start/Command (82.24s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.71s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-692334 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-692334 --output=json --user=testUser: (5.710961362s)
--- PASS: TestJSONOutput/stop/Command (5.71s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-368456 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-368456 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (92.782001ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"de3fd606-de80-4184-944f-ed73269d91f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-368456] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d19491de-9d23-4e38-8c0c-38d34b33e8f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21701"}}
	{"specversion":"1.0","id":"3ca42118-1177-478d-8204-cb1e88ea574f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f8e07ac4-3a1d-4814-a13b-09c0f58bccc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig"}}
	{"specversion":"1.0","id":"75c481fa-e934-4176-93b2-9f2d604d4244","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube"}}
	{"specversion":"1.0","id":"f7c44708-34cc-4880-9417-8e0a49811439","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"07f2e130-c2ae-40ee-8050-6887fc2023d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"eff5b8ef-3e5a-4c72-933e-ad2612e66a67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-368456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-368456
--- PASS: TestErrorJSONOutput (0.23s)
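
Each stdout line above is a CloudEvents-style JSON object. A minimal sketch of decoding those lines in Go (field names are taken from the output above; everything else, including reading from stdin, is illustrative):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent models the fields visible in the JSON output above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		// The failing run above ends with an "...minikube.error" event
		// carrying name, exitcode, and message fields.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}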

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (44.31s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-389438 --network=
E1006 19:17:56.447540    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-389438 --network=: (42.160691218s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-389438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-389438
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-389438: (2.126167948s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.31s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (40.09s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-623455 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-623455 --network=bridge: (38.112755272s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-623455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-623455
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-623455: (1.944830661s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (40.09s)

                                                
                                    
x
+
TestKicExistingNetwork (36.79s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1006 19:18:39.398333    4350 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1006 19:18:39.414950    4350 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1006 19:18:39.415026    4350 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1006 19:18:39.415043    4350 cli_runner.go:164] Run: docker network inspect existing-network
W1006 19:18:39.431656    4350 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1006 19:18:39.431689    4350 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1006 19:18:39.431743    4350 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1006 19:18:39.431870    4350 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1006 19:18:39.448122    4350 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7058eae896da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:57:c0:bd:de:1b} reservation:<nil>}
I1006 19:18:39.448389    4350 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d711a0}
I1006 19:18:39.448414    4350 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1006 19:18:39.448463    4350 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1006 19:18:39.505288    4350 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-309394 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-309394 --network=existing-network: (34.649453771s)
helpers_test.go:175: Cleaning up "existing-network-309394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-309394
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-309394: (2.000637747s)
I1006 19:19:16.171884    4350 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.79s)
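
The network_create.go lines above show the subnet probe (192.168.49.0/24 is taken, so 192.168.58.0/24 is chosen) followed by the docker network create call. A standalone os/exec sketch that replays just that create step, with the flags copied from the log (illustrative only; not taken from minikube's cli_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Flags mirror the "docker network create" invocation logged above.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("network create failed:", err)
	}
}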

                                                
                                    
x
+
TestKicCustomSubnet (35.54s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-614223 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-614223 --subnet=192.168.60.0/24: (33.360154924s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-614223 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-614223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-614223
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-614223: (2.144952933s)
--- PASS: TestKicCustomSubnet (35.54s)

                                                
                                    
x
+
TestKicStaticIP (34.52s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-686377 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-686377 --static-ip=192.168.200.200: (32.3164186s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-686377 ip
helpers_test.go:175: Cleaning up "static-ip-686377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-686377
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-686377: (2.04444628s)
--- PASS: TestKicStaticIP (34.52s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (68.32s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-277550 --driver=docker  --container-runtime=crio
E1006 19:20:48.449435    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-277550 --driver=docker  --container-runtime=crio: (31.377125472s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-280142 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-280142 --driver=docker  --container-runtime=crio: (31.271705795s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-277550
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-280142
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-280142" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-280142
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-280142: (1.989518106s)
helpers_test.go:175: Cleaning up "first-277550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-277550
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-277550: (2.232785943s)
--- PASS: TestMinikubeProfile (68.32s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (9.11s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-315606 --memory=3072 --mount-string /tmp/TestMountStartserial3971696053/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-315606 --memory=3072 --mount-string /tmp/TestMountStartserial3971696053/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.112923763s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.11s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-315606 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (9.68s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-317436 --memory=3072 --mount-string /tmp/TestMountStartserial3971696053/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-317436 --memory=3072 --mount-string /tmp/TestMountStartserial3971696053/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.679982734s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-317436 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-315606 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-315606 --alsologtostderr -v=5: (1.631154932s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-317436 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-317436
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-317436: (1.219052058s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.25s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-317436
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-317436: (7.252049333s)
--- PASS: TestMountStart/serial/RestartStopped (8.25s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-317436 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (141.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-237922 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1006 19:22:56.447564    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:23:51.521857    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:24:19.510283    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-237922 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m21.074836814s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (141.59s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-237922 -- rollout status deployment/busybox: (3.34278649s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- exec busybox-7b57f96db7-8dlc6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- exec busybox-7b57f96db7-nznqs -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- exec busybox-7b57f96db7-8dlc6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- exec busybox-7b57f96db7-nznqs -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- exec busybox-7b57f96db7-8dlc6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- exec busybox-7b57f96db7-nznqs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.21s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- exec busybox-7b57f96db7-8dlc6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- exec busybox-7b57f96db7-8dlc6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- exec busybox-7b57f96db7-nznqs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-237922 -- exec busybox-7b57f96db7-nznqs -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (60.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-237922 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-237922 -v=5 --alsologtostderr: (59.80251341s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (60.54s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-237922 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 cp testdata/cp-test.txt multinode-237922:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 cp multinode-237922:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1553238676/001/cp-test_multinode-237922.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 cp multinode-237922:/home/docker/cp-test.txt multinode-237922-m02:/home/docker/cp-test_multinode-237922_multinode-237922-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922-m02 "sudo cat /home/docker/cp-test_multinode-237922_multinode-237922-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 cp multinode-237922:/home/docker/cp-test.txt multinode-237922-m03:/home/docker/cp-test_multinode-237922_multinode-237922-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922-m03 "sudo cat /home/docker/cp-test_multinode-237922_multinode-237922-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 cp testdata/cp-test.txt multinode-237922-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 cp multinode-237922-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1553238676/001/cp-test_multinode-237922-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 cp multinode-237922-m02:/home/docker/cp-test.txt multinode-237922:/home/docker/cp-test_multinode-237922-m02_multinode-237922.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922 "sudo cat /home/docker/cp-test_multinode-237922-m02_multinode-237922.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 cp multinode-237922-m02:/home/docker/cp-test.txt multinode-237922-m03:/home/docker/cp-test_multinode-237922-m02_multinode-237922-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922-m03 "sudo cat /home/docker/cp-test_multinode-237922-m02_multinode-237922-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 cp testdata/cp-test.txt multinode-237922-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 cp multinode-237922-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1553238676/001/cp-test_multinode-237922-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 cp multinode-237922-m03:/home/docker/cp-test.txt multinode-237922:/home/docker/cp-test_multinode-237922-m03_multinode-237922.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922 "sudo cat /home/docker/cp-test_multinode-237922-m03_multinode-237922.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 cp multinode-237922-m03:/home/docker/cp-test.txt multinode-237922-m02:/home/docker/cp-test_multinode-237922-m03_multinode-237922-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 ssh -n multinode-237922-m02 "sudo cat /home/docker/cp-test_multinode-237922-m03_multinode-237922-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.51s)
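
Each step above follows the same copy-then-verify pattern: minikube cp puts the file on a node, then minikube ssh -n <node> "sudo cat ..." reads it back. A rough Go sketch of one round of that pattern, assuming a minikube binary on PATH and the profile, node, and local file passed on the command line (the test hard-codes its node names and drives out/minikube-linux-arm64):

// cpverify.go: a rough sketch of the copy-then-verify pattern shown above.
// Assumptions: "minikube" is on PATH; arguments are <profile> <node> <local-file>.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	if len(os.Args) != 4 {
		log.Fatal("usage: cpverify <profile> <node> <local-file>")
	}
	profile, node, src := os.Args[1], os.Args[2], os.Args[3]
	dst := "/home/docker/cp-test.txt"

	// Copy the local file onto the node.
	if out, err := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// Read it back over SSH and compare with the local contents.
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+dst).Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	want, err := os.ReadFile(src)
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("contents differ after copy to %s", node)
	}
	log.Printf("verified %s on %s", dst, node)
}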

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-237922 node stop m03: (1.214055371s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 status
E1006 19:25:48.448923    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-237922 status: exit status 7 (537.554521ms)

                                                
                                                
-- stdout --
	multinode-237922
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-237922-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-237922-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-237922 status --alsologtostderr: exit status 7 (534.752511ms)

                                                
                                                
-- stdout --
	multinode-237922
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-237922-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-237922-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:25:48.749887  110420 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:25:48.750072  110420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:25:48.750103  110420 out.go:374] Setting ErrFile to fd 2...
	I1006 19:25:48.750125  110420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:25:48.750385  110420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:25:48.750596  110420 out.go:368] Setting JSON to false
	I1006 19:25:48.750665  110420 mustload.go:65] Loading cluster: multinode-237922
	I1006 19:25:48.750740  110420 notify.go:220] Checking for updates...
	I1006 19:25:48.751729  110420 config.go:182] Loaded profile config "multinode-237922": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:25:48.751785  110420 status.go:174] checking status of multinode-237922 ...
	I1006 19:25:48.752412  110420 cli_runner.go:164] Run: docker container inspect multinode-237922 --format={{.State.Status}}
	I1006 19:25:48.774833  110420 status.go:371] multinode-237922 host status = "Running" (err=<nil>)
	I1006 19:25:48.774854  110420 host.go:66] Checking if "multinode-237922" exists ...
	I1006 19:25:48.775208  110420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-237922
	I1006 19:25:48.800557  110420 host.go:66] Checking if "multinode-237922" exists ...
	I1006 19:25:48.800827  110420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:25:48.800870  110420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-237922
	I1006 19:25:48.823918  110420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32905 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/multinode-237922/id_rsa Username:docker}
	I1006 19:25:48.924421  110420 ssh_runner.go:195] Run: systemctl --version
	I1006 19:25:48.932694  110420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:25:48.946713  110420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:25:49.010941  110420 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-06 19:25:49.000832018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:25:49.011682  110420 kubeconfig.go:125] found "multinode-237922" server: "https://192.168.67.2:8443"
	I1006 19:25:49.011811  110420 api_server.go:166] Checking apiserver status ...
	I1006 19:25:49.011857  110420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 19:25:49.023271  110420 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1246/cgroup
	I1006 19:25:49.031539  110420 api_server.go:182] apiserver freezer: "6:freezer:/docker/960f7ae4e147f98c64c691a1d6d6f5bdf34ca723f67fcf5ec06cfa61b7069cdf/crio/crio-619897ce41cc00d07238b826eb75777b6947c5dfbeffe31894ffa74a2e1cd319"
	I1006 19:25:49.031605  110420 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/960f7ae4e147f98c64c691a1d6d6f5bdf34ca723f67fcf5ec06cfa61b7069cdf/crio/crio-619897ce41cc00d07238b826eb75777b6947c5dfbeffe31894ffa74a2e1cd319/freezer.state
	I1006 19:25:49.041152  110420 api_server.go:204] freezer state: "THAWED"
	I1006 19:25:49.041183  110420 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 19:25:49.051248  110420 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1006 19:25:49.051279  110420 status.go:463] multinode-237922 apiserver status = Running (err=<nil>)
	I1006 19:25:49.051291  110420 status.go:176] multinode-237922 status: &{Name:multinode-237922 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 19:25:49.051309  110420 status.go:174] checking status of multinode-237922-m02 ...
	I1006 19:25:49.051625  110420 cli_runner.go:164] Run: docker container inspect multinode-237922-m02 --format={{.State.Status}}
	I1006 19:25:49.068866  110420 status.go:371] multinode-237922-m02 host status = "Running" (err=<nil>)
	I1006 19:25:49.068890  110420 host.go:66] Checking if "multinode-237922-m02" exists ...
	I1006 19:25:49.069198  110420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-237922-m02
	I1006 19:25:49.086993  110420 host.go:66] Checking if "multinode-237922-m02" exists ...
	I1006 19:25:49.087302  110420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 19:25:49.087346  110420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-237922-m02
	I1006 19:25:49.104246  110420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32910 SSHKeyPath:/home/jenkins/minikube-integration/21701-2540/.minikube/machines/multinode-237922-m02/id_rsa Username:docker}
	I1006 19:25:49.196639  110420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 19:25:49.210382  110420 status.go:176] multinode-237922-m02 status: &{Name:multinode-237922-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1006 19:25:49.210419  110420 status.go:174] checking status of multinode-237922-m03 ...
	I1006 19:25:49.210751  110420 cli_runner.go:164] Run: docker container inspect multinode-237922-m03 --format={{.State.Status}}
	I1006 19:25:49.227934  110420 status.go:371] multinode-237922-m03 host status = "Stopped" (err=<nil>)
	I1006 19:25:49.227958  110420 status.go:384] host is not running, skipping remaining checks
	I1006 19:25:49.227974  110420 status.go:176] multinode-237922-m03 status: &{Name:multinode-237922-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
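
Note that status exits 7 here even though it still prints per-node output: the non-zero code appears to flag that at least one host is stopped rather than a hard failure. A rough Go sketch of handling that, assuming a minikube binary on PATH; the exact exit-code semantics are minikube's, not defined here:

// statuscheck.go: a rough sketch of interpreting the exit code from
// "minikube status" after stopping a node, based on the exit status 7 shown above.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "multinode-237922", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		log.Println("all nodes report Running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Matches the run above: per-node status is still printed, but the
		// command exits 7 because at least one host/kubelet is Stopped.
		log.Println("status exited 7: one or more nodes are stopped")
	default:
		log.Fatalf("status failed: %v", err)
	}
}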

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-237922 node start m03 -v=5 --alsologtostderr: (7.093839405s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.87s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (74.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-237922
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-237922
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-237922: (24.696811034s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-237922 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-237922 --wait=true -v=5 --alsologtostderr: (49.233587622s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-237922
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.06s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-237922 node delete m03: (4.903840308s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.57s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-237922 stop: (23.631680622s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-237922 status: exit status 7 (102.545324ms)

                                                
                                                
-- stdout --
	multinode-237922
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-237922-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-237922 status --alsologtostderr: exit status 7 (89.689443ms)

                                                
                                                
-- stdout --
	multinode-237922
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-237922-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:27:40.513914  118145 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:27:40.514095  118145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:27:40.514120  118145 out.go:374] Setting ErrFile to fd 2...
	I1006 19:27:40.514139  118145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:27:40.514422  118145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:27:40.514679  118145 out.go:368] Setting JSON to false
	I1006 19:27:40.514735  118145 mustload.go:65] Loading cluster: multinode-237922
	I1006 19:27:40.514816  118145 notify.go:220] Checking for updates...
	I1006 19:27:40.516107  118145 config.go:182] Loaded profile config "multinode-237922": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:27:40.516159  118145 status.go:174] checking status of multinode-237922 ...
	I1006 19:27:40.517477  118145 cli_runner.go:164] Run: docker container inspect multinode-237922 --format={{.State.Status}}
	I1006 19:27:40.535409  118145 status.go:371] multinode-237922 host status = "Stopped" (err=<nil>)
	I1006 19:27:40.535430  118145 status.go:384] host is not running, skipping remaining checks
	I1006 19:27:40.535437  118145 status.go:176] multinode-237922 status: &{Name:multinode-237922 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 19:27:40.535462  118145 status.go:174] checking status of multinode-237922-m02 ...
	I1006 19:27:40.536006  118145 cli_runner.go:164] Run: docker container inspect multinode-237922-m02 --format={{.State.Status}}
	I1006 19:27:40.558280  118145 status.go:371] multinode-237922-m02 host status = "Stopped" (err=<nil>)
	I1006 19:27:40.558306  118145 status.go:384] host is not running, skipping remaining checks
	I1006 19:27:40.558312  118145 status.go:176] multinode-237922-m02 status: &{Name:multinode-237922-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.82s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-237922 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1006 19:27:56.447533    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-237922 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.773565944s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-237922 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.45s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-237922
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-237922-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-237922-m02 --driver=docker  --container-runtime=crio: exit status 14 (93.557972ms)

                                                
                                                
-- stdout --
	* [multinode-237922-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-237922-m02' is duplicated with machine name 'multinode-237922-m02' in profile 'multinode-237922'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-237922-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-237922-m03 --driver=docker  --container-runtime=crio: (32.461393411s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-237922
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-237922: exit status 80 (316.698899ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-237922 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-237922-m03 already exists in multinode-237922-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-237922-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-237922-m03: (1.929407968s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.86s)

                                                
                                    
TestPreload (124.28s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-139643 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-139643 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m3.76788206s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-139643 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-139643 image pull gcr.io/k8s-minikube/busybox: (2.161426712s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-139643
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-139643: (5.739891452s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-139643 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1006 19:30:48.449064    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-139643 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (50.049699148s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-139643 image list
helpers_test.go:175: Cleaning up "test-preload-139643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-139643
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-139643: (2.330151166s)
--- PASS: TestPreload (124.28s)

                                                
                                    
TestScheduledStopUnix (110.45s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-311643 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-311643 --memory=3072 --driver=docker  --container-runtime=crio: (33.34210885s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-311643 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-311643 -n scheduled-stop-311643
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-311643 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1006 19:31:50.094680    4350 retry.go:31] will retry after 54.818µs: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.094816    4350 retry.go:31] will retry after 92.051µs: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.095206    4350 retry.go:31] will retry after 181.05µs: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.095510    4350 retry.go:31] will retry after 213.606µs: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.095937    4350 retry.go:31] will retry after 729.827µs: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.097053    4350 retry.go:31] will retry after 511.524µs: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.098175    4350 retry.go:31] will retry after 1.376226ms: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.100600    4350 retry.go:31] will retry after 1.464689ms: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.102774    4350 retry.go:31] will retry after 2.055225ms: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.104921    4350 retry.go:31] will retry after 3.32588ms: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.109206    4350 retry.go:31] will retry after 4.305295ms: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.114514    4350 retry.go:31] will retry after 5.157126ms: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.120933    4350 retry.go:31] will retry after 18.78682ms: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.140194    4350 retry.go:31] will retry after 17.471474ms: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.158419    4350 retry.go:31] will retry after 27.681179ms: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
I1006 19:31:50.186651    4350 retry.go:31] will retry after 63.53938ms: open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/scheduled-stop-311643/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-311643 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-311643 -n scheduled-stop-311643
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-311643
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-311643 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1006 19:32:56.454031    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-311643
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-311643: exit status 7 (70.778748ms)

                                                
                                                
-- stdout --
	scheduled-stop-311643
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-311643 -n scheduled-stop-311643
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-311643 -n scheduled-stop-311643: exit status 7 (68.961543ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-311643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-311643
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-311643: (5.476201008s)
--- PASS: TestScheduledStopUnix (110.45s)
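
The retry.go lines above show the harness polling for the scheduled-stop pid file with steadily growing waits. A minimal Go sketch of that retry-with-backoff pattern; the path, delays, and deadline below are illustrative (hypothetical), not minikube's actual values:

// retrysketch.go: a minimal sketch of the retry-with-growing-backoff pattern
// visible in the "will retry after ..." lines above, here waiting for a pid file.
package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForFile polls path until it can be read or the deadline passes,
// roughly doubling the wait between attempts.
func waitForFile(path string, deadline time.Duration) ([]byte, error) {
	delay := 50 * time.Microsecond
	start := time.Now()
	for {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		if time.Since(start) > deadline {
			return nil, fmt.Errorf("gave up waiting for %s: %w", path, err)
		}
		log.Printf("will retry after %v: %v", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	// Hypothetical pid-file location, used only for illustration.
	pid, err := waitForFile("/tmp/scheduled-stop-pid", 2*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("scheduled stop pid: %s\n", pid)
}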

                                                
                                    
TestInsufficientStorage (14.38s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-398274 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-398274 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.879981322s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"de201623-9a8e-4f0c-88cd-835b2466c204","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-398274] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8cdf0703-98f5-4c8a-a7df-df33db3f35c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21701"}}
	{"specversion":"1.0","id":"d2ee3509-303c-45c6-8d4f-ca315fd7e038","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"53d1cec4-50f8-4471-9b8e-16125e03c096","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig"}}
	{"specversion":"1.0","id":"93670aad-e608-48fa-991b-99003ec3040c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube"}}
	{"specversion":"1.0","id":"57d4816a-081e-4fb2-b579-f02de38a3190","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"118bd914-8d60-48f9-9960-392bdcbb513d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8306eef9-6af0-4688-9d8c-93b3dff5d95d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ecb41da2-be02-44b3-a371-af2437514caa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"95ad3a6a-8a93-492e-a7d2-72d18ab87b99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"61186d59-c438-4668-bc0e-e292eb3aebf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0553d2eb-a07b-44e3-be51-589798c13d48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-398274\" primary control-plane node in \"insufficient-storage-398274\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"712ffd56-b888-4193-9c00-a923e78732b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"171234de-517a-4148-911c-33e3f10f918f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c5ec8e4-3c09-468d-8e54-9df717a78b09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-398274 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-398274 --output=json --layout=cluster: exit status 7 (290.690692ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-398274","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-398274","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 19:33:18.830918  134301 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-398274" does not appear in /home/jenkins/minikube-integration/21701-2540/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-398274 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-398274 --output=json --layout=cluster: exit status 7 (298.218439ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-398274","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-398274","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 19:33:19.127935  134366 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-398274" does not appear in /home/jenkins/minikube-integration/21701-2540/kubeconfig
	E1006 19:33:19.138687  134366 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/insufficient-storage-398274/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-398274" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-398274
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-398274: (1.912056722s)
--- PASS: TestInsufficientStorage (14.38s)
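
With --output=json, start emits one CloudEvents-style JSON object per line, and the RSRC_DOCKER_STORAGE failure above arrives as an io.k8s.sigs.minikube.error event. A rough Go sketch of picking that event out of the stream, modelling only the fields visible in this log (the full schema may differ):

// events.go: a rough sketch of reading the line-delimited JSON events that
// "minikube start --output=json" printed above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe the minikube JSON output in here
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// e.g. RSRC_DOCKER_STORAGE with exitcode 26 in the run above.
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}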

                                                
                                    
TestRunningBinaryUpgrade (59.12s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2179797893 start -p running-upgrade-462878 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2179797893 start -p running-upgrade-462878 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.668315292s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-462878 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-462878 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.207585143s)
helpers_test.go:175: Cleaning up "running-upgrade-462878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-462878
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-462878: (2.079260886s)
--- PASS: TestRunningBinaryUpgrade (59.12s)

                                                
                                    
TestKubernetesUpgrade (363.44s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-977990 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-977990 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.940421034s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-977990
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-977990: (1.291454726s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-977990 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-977990 status --format={{.Host}}: exit status 7 (92.709879ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-977990 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1006 19:35:48.449412    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-977990 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.904725136s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-977990 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-977990 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-977990 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (101.405938ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-977990] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-977990
	    minikube start -p kubernetes-upgrade-977990 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9779902 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-977990 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-977990 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1006 19:40:31.523850    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-977990 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.942943776s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-977990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-977990
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-977990: (2.075346202s)
--- PASS: TestKubernetesUpgrade (363.44s)

                                                
                                    
TestMissingContainerUpgrade (117.52s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.4036018886 start -p missing-upgrade-911983 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.4036018886 start -p missing-upgrade-911983 --memory=3072 --driver=docker  --container-runtime=crio: (1m4.205677471s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-911983
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-911983
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-911983 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-911983 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.658367113s)
helpers_test.go:175: Cleaning up "missing-upgrade-911983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-911983
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-911983: (2.349629975s)
--- PASS: TestMissingContainerUpgrade (117.52s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-262772 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-262772 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (98.01097ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-262772] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (45.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-262772 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-262772 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (45.196632948s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-262772 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.65s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-262772 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-262772 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.670017932s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-262772 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-262772 status -o json: exit status 2 (392.302764ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-262772","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-262772
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-262772: (1.949208789s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.01s)
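
Note that status -o json exits 2 here because some components are stopped; the test only needs the JSON body. A small sketch for inspecting the same fields shown above, assuming jq is available (jq is not part of the test):

	# for a --no-kubernetes profile the expected triple is Running / Stopped / Stopped
	out/minikube-linux-arm64 -p NoKubernetes-262772 status -o json | jq -r '.Host, .Kubelet, .APIServer'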

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-262772 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-262772 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.926250516s)
--- PASS: TestNoKubernetes/serial/Start (9.93s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-262772 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-262772 "sudo systemctl is-active --quiet service kubelet": exit status 1 (365.698113ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)
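
The non-zero exit is the success condition here: systemctl is-active exits 0 only when the unit is active, and the status 3 seen above means kubelet is inactive, which is exactly what a --no-kubernetes profile should report. A minimal sketch of the same check wrapped in a shell test:

	# kubelet must NOT be active; the ssh command fails when the unit is inactive
	if out/minikube-linux-arm64 ssh -p NoKubernetes-262772 "sudo systemctl is-active --quiet service kubelet"; then
		echo "unexpected: kubelet is running"
	else
		echo "ok: kubelet is not running"
	fi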

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (1.109828372s)
--- PASS: TestNoKubernetes/serial/ProfileList (1.93s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-262772
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-262772: (1.310601266s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-262772 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-262772 --driver=docker  --container-runtime=crio: (6.962825446s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.96s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-262772 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-262772 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.056442ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.78s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (60.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2378260226 start -p stopped-upgrade-360545 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2378260226 start -p stopped-upgrade-360545 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.712043969s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2378260226 -p stopped-upgrade-360545 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2378260226 -p stopped-upgrade-360545 stop: (1.237278871s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-360545 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-360545 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.608320582s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (60.56s)
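
The upgrade path covered above is: provision with the legacy binary, stop the cluster cleanly, then start the stopped profile with the current binary. A minimal sketch using the commands from this run (the /tmp binary path is specific to this job):

	# provision and stop with the legacy v1.32.0 binary
	/tmp/minikube-v1.32.0.2378260226 start -p stopped-upgrade-360545 --memory=3072 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.32.0.2378260226 -p stopped-upgrade-360545 stop
	# the current binary must bring the stopped profile back up without recreating it
	out/minikube-linux-arm64 start -p stopped-upgrade-360545 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=crio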

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-360545
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-360545: (1.164078811s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                    
x
+
TestPause/serial/Start (79.22s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-719933 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1006 19:37:56.446920    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-719933 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m19.224837761s)
--- PASS: TestPause/serial/Start (79.22s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (27.43s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-719933 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-719933 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.40411697s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.43s)
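
The pause group sets up its cluster with addons disabled and a full wait, then runs start a second time against the same profile to confirm the existing cluster is reused rather than reconfigured. A minimal sketch of the two invocations from this run:

	# first start: no addons, wait for all components to be ready
	out/minikube-linux-arm64 start -p pause-719933 --memory=3072 --install-addons=false --wait=all --driver=docker --container-runtime=crio
	# second start against the running profile; it should converge quickly with no reconfiguration
	out/minikube-linux-arm64 start -p pause-719933 --alsologtostderr -v=1 --driver=docker --container-runtime=crio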

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-053944 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-053944 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (195.154799ms)

                                                
                                                
-- stdout --
	* [false-053944] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 19:40:44.117391  173600 out.go:360] Setting OutFile to fd 1 ...
	I1006 19:40:44.117595  173600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:40:44.117623  173600 out.go:374] Setting ErrFile to fd 2...
	I1006 19:40:44.117642  173600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 19:40:44.117946  173600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-2540/.minikube/bin
	I1006 19:40:44.118501  173600 out.go:368] Setting JSON to false
	I1006 19:40:44.119418  173600 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4980,"bootTime":1759774665,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1006 19:40:44.119521  173600 start.go:140] virtualization:  
	I1006 19:40:44.123123  173600 out.go:179] * [false-053944] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 19:40:44.126201  173600 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 19:40:44.126266  173600 notify.go:220] Checking for updates...
	I1006 19:40:44.131990  173600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 19:40:44.134841  173600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-2540/kubeconfig
	I1006 19:40:44.137814  173600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-2540/.minikube
	I1006 19:40:44.140744  173600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 19:40:44.143665  173600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 19:40:44.147128  173600 config.go:182] Loaded profile config "force-systemd-flag-203169": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1006 19:40:44.147278  173600 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 19:40:44.175877  173600 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 19:40:44.176006  173600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 19:40:44.236485  173600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 19:40:44.227672732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 19:40:44.236597  173600 docker.go:318] overlay module found
	I1006 19:40:44.240257  173600 out.go:179] * Using the docker driver based on user configuration
	I1006 19:40:44.245044  173600 start.go:304] selected driver: docker
	I1006 19:40:44.245068  173600 start.go:924] validating driver "docker" against <nil>
	I1006 19:40:44.245082  173600 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 19:40:44.251274  173600 out.go:203] 
	W1006 19:40:44.254891  173600 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1006 19:40:44.257878  173600 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-053944 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-053944

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-053944

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-053944

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-053944

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-053944

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-053944

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-053944

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-053944

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-053944

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-053944

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-053944

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-053944" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-053944" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-053944

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-053944"

                                                
                                                
----------------------- debugLogs end: false-053944 [took: 3.501620323s] --------------------------------
helpers_test.go:175: Cleaning up "false-053944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-053944
--- PASS: TestNetworkPlugins/group/false (3.85s)
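
Exit status 14 is the expected outcome: cri-o ships no built-in pod networking, so minikube rejects --cni=false with that runtime. A minimal sketch of the rejected command plus a variant that would be accepted (the bridge CNI is only an illustration; this test never starts a cluster):

	# rejected: 'The "crio" container runtime requires CNI' (exit code 14)
	out/minikube-linux-arm64 start -p false-053944 --memory=3072 --cni=false --driver=docker --container-runtime=crio
	# a CNI must be selected (or left on auto) when using cri-o, for example:
	out/minikube-linux-arm64 start -p false-053944 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio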

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (59.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1006 19:50:48.449096    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (59.175167988s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (59.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-100545 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6d3b5c33-4fd7-4e24-8775-4f0f4c5f0a46] Pending
helpers_test.go:352: "busybox" [6d3b5c33-4fd7-4e24-8775-4f0f4c5f0a46] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6d3b5c33-4fd7-4e24-8775-4f0f4c5f0a46] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003095459s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-100545 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)
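
The DeployApp step applies testdata/busybox.yaml and then polls until a pod labelled integration-test=busybox is Running and Ready. Roughly the same sequence by hand, with kubectl wait standing in for the test's own poll loop (the wait command is an assumption, not what the test executes):

	kubectl --context old-k8s-version-100545 create -f testdata/busybox.yaml
	# stand-in for the 8m poll on the label shown above
	kubectl --context old-k8s-version-100545 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-100545 exec busybox -- /bin/sh -c "ulimit -n"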

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-100545 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-100545 --alsologtostderr -v=3: (11.896521821s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.90s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-100545 -n old-k8s-version-100545
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-100545 -n old-k8s-version-100545: exit status 7 (85.447453ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-100545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
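
With the cluster stopped, status exits 7 (host stopped), which the test treats as acceptable; the dashboard addon is then enabled while the profile is offline so it takes effect on the next start. A minimal sketch of the same two commands:

	# exit code 7 simply reports a stopped host, which is expected at this point
	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-100545 -n old-k8s-version-100545 || true
	# addons can be toggled on a stopped profile; the override pins the metrics-scraper image
	out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-100545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4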

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (47.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-100545 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.690523317s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-100545 -n old-k8s-version-100545
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-c7sw4" [953e3fd1-a661-4e2a-9079-eeeb2e0e3746] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004049481s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-c7sw4" [953e3fd1-a661-4e2a-9079-eeeb2e0e3746] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003474867s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-100545 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-100545 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
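
The image check dumps everything loaded in the node as JSON and reports repositories outside the expected Kubernetes set; the kindnetd and busybox entries above come from the preceding steps. A sketch for inspecting the same list, assuming jq is available and that each JSON entry carries a repoTags array (that field name is not confirmed by this log):

	out/minikube-linux-arm64 -p old-k8s-version-100545 image list --format=json | jq .
	# assuming a repoTags field per entry, list just the tag names:
	out/minikube-linux-arm64 -p old-k8s-version-100545 image list --format=json | jq -r '.[].repoTags[]?'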

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (76.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1006 19:52:56.447320    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m16.926234677s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (83.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m23.460150274s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-314275 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [455522a4-5398-4b39-bd9c-3c0361fb193f] Pending
helpers_test.go:352: "busybox" [455522a4-5398-4b39-bd9c-3c0361fb193f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [455522a4-5398-4b39-bd9c-3c0361fb193f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004251207s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-314275 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.45s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-314275 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-314275 --alsologtostderr -v=3: (12.146850052s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-314275 -n no-preload-314275
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-314275 -n no-preload-314275: exit status 7 (76.895941ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-314275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (50.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-314275 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.88637732s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-314275 -n no-preload-314275
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.39s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-830393 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ce5b9cbf-2167-4e11-9e30-7b122bb80999] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ce5b9cbf-2167-4e11-9e30-7b122bb80999] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003047623s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-830393 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-830393 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-830393 --alsologtostderr -v=3: (12.226858997s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k8dzl" [e27e880a-6eee-4ece-b6b7-14cc5f631c89] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002987782s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k8dzl" [e27e880a-6eee-4ece-b6b7-14cc5f631c89] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003572332s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-314275 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-314275 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-830393 -n embed-certs-830393
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-830393 -n embed-certs-830393: exit status 7 (90.89822ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-830393 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (53.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-830393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.064881015s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-830393 -n embed-certs-830393
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.53s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1006 19:55:48.449357    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:55:56.909724    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:55:56.916026    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:55:56.927359    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:55:56.948710    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:55:56.990049    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:55:57.071393    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:55:57.232829    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:55:57.554853    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:55:58.196954    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:55:59.478929    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:56:02.040913    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.532316852s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.53s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dg6tb" [8d3356fd-da8c-4b19-9b5c-acf2329fb3d9] Running
E1006 19:56:07.162912    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003741368s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dg6tb" [8d3356fd-da8c-4b19-9b5c-acf2329fb3d9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00342489s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-830393 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-830393 image list --format=json
E1006 19:56:17.404797    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (39.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-988436 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1006 19:56:37.886924    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-988436 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (39.762829399s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.76s)
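
The newest-cni FirstStart brings the cluster up with `--network-plugin=cni` and a custom pod CIDR passed through kubeadm extra-config, and the step is timed. Below is a sketch of driving that same start command with a hard overall timeout, so a wedged start surfaces as an error rather than hanging the run; the 15-minute budget is an assumed value, not one taken from the suite.

-- go example (illustrative) --
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Hard budget for the whole start; 15 minutes is an assumed value.
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()

	// Flags copied from the FirstStart invocation above.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64", "start",
		"-p", "newest-cni-988436", "--memory=3072", "--alsologtostderr",
		"--wait=apiserver,system_pods,default_sa",
		"--network-plugin=cni",
		"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
		"--driver=docker", "--container-runtime=crio",
		"--kubernetes-version=v1.34.1")

	started := time.Now()
	out, err := cmd.CombinedOutput()
	fmt.Printf("start finished in %s, err=%v (%d bytes of output)\n",
		time.Since(started).Round(time.Second), err, len(out))
}
-- /go example --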

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-997276 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [53963bee-94d1-4ca1-8020-154e6f994193] Pending
helpers_test.go:352: "busybox" [53963bee-94d1-4ca1-8020-154e6f994193] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [53963bee-94d1-4ca1-8020-154e6f994193] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003315861s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-997276 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.50s)
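
DeployApp creates the busybox pod from testdata/busybox.yaml, waits up to 8 minutes for it to run, and then reads the container's open-file limit with `ulimit -n`. The Go sketch below reproduces that sequence with plain kubectl calls; the context name and manifest path are taken from the log, while the polling loop is illustrative rather than the test's own wait helper.

-- go example (illustrative) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// run shells out to kubectl and returns trimmed combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	kubeContext := "default-k8s-diff-port-997276"

	// Create the pod from the same manifest the test uses.
	if out, err := run("--context", kubeContext, "create", "-f", "testdata/busybox.yaml"); err != nil {
		fmt.Println("create failed:", out, err)
		return
	}

	// Poll the pod phase until Running (the test allows up to 8 minutes).
	for deadline := time.Now().Add(8 * time.Minute); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
		if phase, _ := run("--context", kubeContext, "get", "pod", "busybox",
			"-o", "jsonpath={.status.phase}"); phase == "Running" {
			break
		}
	}

	// Read the container's file-descriptor limit, as the test does.
	out, err := run("--context", kubeContext, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	fmt.Println("ulimit -n:", out, err)
}
-- /go example --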

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-997276 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-997276 --alsologtostderr -v=3: (12.224034096s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-988436 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-988436 --alsologtostderr -v=3: (1.219064017s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-988436 -n newest-cni-988436
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-988436 -n newest-cni-988436: exit status 7 (65.127257ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-988436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (22.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-988436 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1006 19:57:11.526350    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-988436 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (21.461015477s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-988436 -n newest-cni-988436
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-997276 -n default-k8s-diff-port-997276
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-997276 -n default-k8s-diff-port-997276: exit status 7 (67.367791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-997276 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1006 19:57:18.848648    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-997276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.554433485s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-997276 -n default-k8s-diff-port-997276
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-988436 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (84.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1006 19:57:56.447535    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m24.696533758s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.70s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l9n6g" [6043dfa7-271b-4e1a-be38-b574fee8ce17] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004195177s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l9n6g" [6043dfa7-271b-4e1a-be38-b574fee8ce17] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003588056s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-997276 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-997276 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1006 19:58:40.770238    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:58:46.018100    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:58:46.024519    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:58:46.036424    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:58:46.057822    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:58:46.099230    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:58:46.180501    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:58:46.342277    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:58:46.664094    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:58:47.306200    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:58:48.587870    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:58:51.150019    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:58:56.272073    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 19:59:06.514278    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m25.001555705s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-053944 "pgrep -a kubelet"
I1006 19:59:08.780957    4350 config.go:182] Loaded profile config "auto-053944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)
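
KubeletFlags captures the kubelet command line inside the node via `minikube ssh ... "pgrep -a kubelet"`, which the suite then inspects for the flags expected for the chosen runtime. The sketch below fetches that line and checks it for a CRI-O socket reference; the socket check is an assumed, illustrative assertion, not necessarily what the test verifies.

-- go example (illustrative) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command as the log entry above: grab the kubelet command line.
	out, err := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "auto-053944",
		"pgrep -a kubelet").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	cmdline := strings.TrimSpace(string(out))
	fmt.Println("kubelet:", cmdline)

	// With --container-runtime=crio one would expect the kubelet to be wired
	// to the CRI-O socket; this substring check is only an illustrative example.
	if strings.Contains(cmdline, "crio.sock") {
		fmt.Println("kubelet appears to be using CRI-O")
	}
}
-- /go example --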

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-053944 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lb654" [678b5a07-7341-4b96-bd54-3f9dd5dbc1f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lb654" [678b5a07-7341-4b96-bd54-3f9dd5dbc1f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003137816s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-053944 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-053944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-053944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
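
The DNS, Localhost and HairPin entries above run three probes inside the netcat deployment: resolving kubernetes.default, connecting to the pod's own listener over loopback, and connecting back to the pod through its own Service name, which exercises hairpin NAT in the CNI/kube-proxy path. The sketch below groups those three kubectl exec probes for readability; the shell commands are copied from the log, the grouping and helper are ours.

-- go example (illustrative) --
package main

import (
	"fmt"
	"os/exec"
)

// probe runs a shell command inside the netcat deployment via kubectl exec.
func probe(kubeContext, name, shellCmd string) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
	fmt.Printf("%s: err=%v\n%s\n", name, err, out)
}

func main() {
	kubeContext := "auto-053944"

	// DNS: resolve the in-cluster API Service name.
	probe(kubeContext, "DNS", "nslookup kubernetes.default")

	// Localhost: reach the pod's own listener over loopback.
	probe(kubeContext, "Localhost", "nc -w 5 -i 5 -z localhost 8080")

	// HairPin: reach the pod back through its own Service name ("netcat"),
	// which exercises hairpin NAT in the CNI / kube-proxy path.
	probe(kubeContext, "HairPin", "nc -w 5 -i 5 -z netcat 8080")
}
-- /go example --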

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (73.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m13.0951597s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-425q4" [8ac7982e-d958-45f5-829d-51c8c19cf4f7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.016161997s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-053944 "pgrep -a kubelet"
I1006 20:00:02.250230    4350 config.go:182] Loaded profile config "kindnet-053944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-053944 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mhmkc" [8b86e044-0dc4-4c46-b68a-aa28e1eaae52] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1006 20:00:07.957692    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/no-preload-314275/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-mhmkc" [8b86e044-0dc4-4c46-b68a-aa28e1eaae52] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004845887s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-053944 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-053944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-053944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (63.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1006 20:00:48.449468    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m3.695684723s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-2dxrt" [57bac031-478b-4de7-bc9d-a411bc70ce63] Running
E1006 20:00:56.910501    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/old-k8s-version-100545/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.032783245s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-053944 "pgrep -a kubelet"
I1006 20:01:00.798211    4350 config.go:182] Loaded profile config "calico-053944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-053944 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kqwlr" [1ec575e6-ea5f-4c7a-893d-7e74a6d8fc56] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kqwlr" [1ec575e6-ea5f-4c7a-893d-7e74a6d8fc56] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004175648s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-053944 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-053944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-053944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (77.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m17.595470688s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (77.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-053944 "pgrep -a kubelet"
I1006 20:01:43.582897    4350 config.go:182] Loaded profile config "custom-flannel-053944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-053944 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f45f9" [10b4a46c-2a67-4ad8-a6ef-12920fda8e99] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1006 20:01:48.721339    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 20:01:48.727771    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 20:01:48.739964    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 20:01:48.761986    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 20:01:48.803352    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-f45f9" [10b4a46c-2a67-4ad8-a6ef-12920fda8e99] Running
E1006 20:01:48.884961    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 20:01:49.047074    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 20:01:49.368347    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 20:01:50.010238    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 20:01:51.292258    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004465847s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-053944 exec deployment/netcat -- nslookup kubernetes.default
E1006 20:01:53.854422    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-053944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-053944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (63.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1006 20:02:29.699387    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/default-k8s-diff-port-997276/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.527533437s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-053944 "pgrep -a kubelet"
I1006 20:02:54.195230    4350 config.go:182] Loaded profile config "enable-default-cni-053944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-053944 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-svtc8" [23585a46-ee55-4f6a-b0de-9ce707b12c09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1006 20:02:56.447623    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/functional-184058/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-svtc8" [23585a46-ee55-4f6a-b0de-9ce707b12c09] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003807403s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-053944 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-053944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-053944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-5jkt4" [e9aa1823-77f7-4036-937a-3db43794de96] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003883143s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (48.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-053944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (48.681984883s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-053944 "pgrep -a kubelet"
I1006 20:03:29.808075    4350 config.go:182] Loaded profile config "flannel-053944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-053944 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nshwd" [cb65c74c-b502-4bf3-8209-aa22045e3e47] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nshwd" [cb65c74c-b502-4bf3-8209-aa22045e3e47] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.022493691s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-053944 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-053944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-053944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)
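The Localhost and HairPin steps run the same nc probe against two targets from inside the pod: localhost (plain loopback) and the pod's own service name, netcat, which only connects when hairpin traffic back through the service VIP works. A sketch of both probes under those assumptions, reusing the exact nc flags from the log:

package main

import (
	"fmt"
	"os/exec"
)

// probe execs nc inside the netcat deployment against target:8080.
// -z only scans, -w 5 sets a 5s timeout, -i 5 adds a 5s interval, matching the log.
func probe(kubeContext, target string) error {
	return exec.Command(
		"kubectl", "--context", kubeContext, "exec", "deployment/netcat", "--",
		"/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target),
	).Run()
}

func main() {
	for _, target := range []string{"localhost", "netcat"} {
		if err := probe("flannel-053944", target); err != nil {
			fmt.Printf("%s probe failed: %v\n", target, err)
			continue
		}
		fmt.Printf("%s probe ok\n", target)
	}
}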

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-053944 "pgrep -a kubelet"
I1006 20:04:15.914931    4350 config.go:182] Loaded profile config "bridge-053944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-053944 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gj2sv" [fd963fa8-af94-4c92-bf25-0551734a77c1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1006 20:04:19.337766    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/auto-053944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-gj2sv" [fd963fa8-af94-4c92-bf25-0551734a77c1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00368175s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-053944 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-053944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-053944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.45s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-993189 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-993189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-993189
--- SKIP: TestDownloadOnlyKic (0.45s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-932453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-932453
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-053944 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-053944

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-053944

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-053944

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-053944

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-053944

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-053944

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-053944

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-053944

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-053944

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-053944

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-053944

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-053944" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-053944" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-053944

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-053944"

                                                
                                                
----------------------- debugLogs end: kubenet-053944 [took: 3.268957044s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-053944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-053944
--- SKIP: TestNetworkPlugins/group/kubenet (3.43s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1006 19:40:48.448959    4350 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-2540/.minikube/profiles/addons-442328/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:636: 
----------------------- debugLogs start: cilium-053944 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-053944" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-053944

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

>>> host: cri-dockerd version:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

>>> host: containerd daemon status:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

>>> host: containerd daemon config:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

>>> host: containerd config dump:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

>>> host: crio daemon status:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

>>> host: crio daemon config:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

>>> host: /etc/crio:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

>>> host: crio config:
* Profile "cilium-053944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-053944"

----------------------- debugLogs end: cilium-053944 [took: 3.674207712s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-053944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-053944
--- SKIP: TestNetworkPlugins/group/cilium (3.83s)